From f85ad563205975d436fd2987988416dcc324f2ee Mon Sep 17 00:00:00 2001 From: WANXIN JIN Date: Sat, 9 Nov 2024 08:46:53 -0700 Subject: [PATCH] update --- ...40b622142f1c98125abcfe89a76a661b0e8e343910 | 2 +- ...4a9aeb9750f4e86ca9413f8d9019050aa1dc0aa038 | 402 ++++++++++++ ...04a1d544028069966f3aa2fe0b823943b7bc63f840 | 148 +++++ ...a9ed462ade2c9581e9338d54035e1c5849d4e06982 | 148 +++++ ...0be835fb92d7d090e05b408466aa485828991d489c | 402 ++++++++++++ ...58770d97e010f26036cbd4a48d2acd0469d67a9d1b | 612 ------------------ ...8711d0f7ac14e01a063126ea8af5428bba512b537} | 4 +- ...8187d12b67353e9515b75cb0b6250807ca02a0eba0 | 148 +++++ ...62d3337d1cd231e2303e12fc32029098269afdf50e | 148 +++++ ...b6414450733746fa32a3575071b882909c92b1d184 | 612 ------------------ ...0067658bfbd690954018d85949fd8c244233b6f24c | 148 +++++ ...a024970e8d5aa4c00cf874c076c3be566df600423} | 8 +- ...0511bbf2c30bce8213c84d1271ba61af98c23ced7a | 79 --- ...d9d963dd40bd1dc8e7b93b59e7408f79736a7610f4 | 148 +++++ ...5b735eb0aca6cce3b2e455037d70e4d988ec25065b | 402 ++++++++++++ ...a53a626f44c4b7e4b01d18aabddb75d3127113a58} | 8 +- ...72eae27a33d76e2fce11735a172977a42f00545862 | 400 ++++++++++++ ...6d1a24f51f6a8c0ddf7d15a8960cd68fab5ceb2baa | 402 ++++++++++++ ...c7a997484177ac4dbfc6bf3a60d06b6718e49ab1d4 | 148 +++++ ...2a5a12300bd93e879ccd26cf2de50ba3f967c72b84 | 612 ------------------ ...8194c01b492ddee8c4b171760b769fe803f89ab964 | 148 +++++ ...f5be56a356c2822762d91adeb71970e768398b7d14 | 400 ++++++++++++ ...4ac175a56a07afca01c371824b2a44470221128d16 | 148 +++++ ...6e16fc3945525e792826c2d78a0f9469f78fef5b85 | 148 +++++ ...7d492967f23a4815a1e907ba34d8675f729a31edbb | 148 +++++ ...7929509e968e7dbc43cdb0de759703e2d38cdba969 | 148 +++++ ...620fdc5c90b7e7e927b5b84c1a94016c715caeb571 | 148 +++++ _pages/0.about.md | 10 +- _pages/1.research.md | 14 +- _pages/2.publications.md | 4 +- _site/CNAME | 1 + _site/feed.xml | 2 +- _site/index.html | 12 +- _site/joining/index.html | 2 +- _site/people/index.html | 2 +- _site/posts/index.html | 2 +- _site/publications/index.html | 6 +- _site/research/index.html | 16 +- _site/robots/index.html | 2 +- _site/teaching/index.html | 2 +- 40 files changed, 4379 insertions(+), 1965 deletions(-) create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/04/861f91c2aa0ea0c273344a9aeb9750f4e86ca9413f8d9019050aa1dc0aa038 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/09/03080141bad686fd4bb004a1d544028069966f3aa2fe0b823943b7bc63f840 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0a/5217a25297a82b96efd6a9ed462ade2c9581e9338d54035e1c5849d4e06982 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0e/25c61d85d04565a70b150be835fb92d7d090e05b408466aa485828991d489c delete mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/29/e861df560fdb4a8a828958770d97e010f26036cbd4a48d2acd0469d67a9d1b rename .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/{12/2ee68f507a6ea3e299d5a9a36ea75e461903b6697aa5cbf25e2e4aac01a7d3 => 2a/8a064e6e7ee5103484b988711d0f7ac14e01a063126ea8af5428bba512b537} (95%) create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/3e/6db327fed4d4110f30198187d12b67353e9515b75cb0b6250807ca02a0eba0 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/67/43d7a64a7111fb4f44eb62d3337d1cd231e2303e12fc32029098269afdf50e delete mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/70/b43bf0813a4f14020da0b6414450733746fa32a3575071b882909c92b1d184 create 
mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/7b/334601b0d0399914c8620067658bfbd690954018d85949fd8c244233b6f24c rename .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/{1e/843110d55507591d160cc2f9c3a49bc237a3aecfde8d02c4f20e93cf4b1f56 => 81/38a47f3f76da7c9e435c0a024970e8d5aa4c00cf874c076c3be566df600423} (98%) delete mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/86/ca7b3b0236b8d4cb59d30511bbf2c30bce8213c84d1271ba61af98c23ced7a create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/89/751d124f0ebbcf8341cdd9d963dd40bd1dc8e7b93b59e7408f79736a7610f4 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/8e/cbc5bfb6e5f95f5747035b735eb0aca6cce3b2e455037d70e4d988ec25065b rename .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/{ce/236edb8bfa494aa84140ec02bb3ffe7631a2b6bfcedd06c0dab9e87a821e9d => 94/50f97824074972e40f9eea53a626f44c4b7e4b01d18aabddb75d3127113a58} (99%) create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/9f/3965ecad97495d5df62872eae27a33d76e2fce11735a172977a42f00545862 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/a9/c99e9ec64d061d5371f66d1a24f51f6a8c0ddf7d15a8960cd68fab5ceb2baa create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ad/63603405353efaff7322c7a997484177ac4dbfc6bf3a60d06b6718e49ab1d4 delete mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/b2/7e7a113eaca0a2a5bf792a5a12300bd93e879ccd26cf2de50ba3f967c72b84 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/b3/4a2a3a4942cf4ba492fe8194c01b492ddee8c4b171760b769fe803f89ab964 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/c6/94b886cb8c22dd8ce899f5be56a356c2822762d91adeb71970e768398b7d14 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/cb/d2f5f2df8897588c5e634ac175a56a07afca01c371824b2a44470221128d16 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/da/b08acf0b9431df4e01006e16fc3945525e792826c2d78a0f9469f78fef5b85 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e5/95a1007c75016db20e2f7d492967f23a4815a1e907ba34d8675f729a31edbb create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e7/0c957fedd863e5d4c80c7929509e968e7dbc43cdb0de759703e2d38cdba969 create mode 100644 .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ef/ca00114274981aa68bfa620fdc5c90b7e7e927b5b84c1a94016c715caeb571 create mode 100644 _site/CNAME diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Cache/b7/9606fb3afea5bd1609ed40b622142f1c98125abcfe89a76a661b0e8e343910 b/.jekyll-cache/Jekyll/Cache/Jekyll--Cache/b7/9606fb3afea5bd1609ed40b622142f1c98125abcfe89a76a661b0e8e343910 index f8a789a..eb1fdb0 100644 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Cache/b7/9606fb3afea5bd1609ed40b622142f1c98125abcfe89a76a661b0e8e343910 +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Cache/b7/9606fb3afea5bd1609ed40b622142f1c98125abcfe89a76a661b0e8e343910 @@ -1 +1 @@ -I"{"source"=>"/Users/wanxinjin/Public/ASU Dropbox/Wanxin Jin/lab/website/asu-iris.github.io/asu-iris.github.io", "destination"=>"/Users/wanxinjin/Public/ASU Dropbox/Wanxin Jin/lab/website/asu-iris.github.io/asu-iris.github.io/_site", "collections_dir"=>"", "cache_dir"=>".jekyll-cache", "plugins_dir"=>"_plugins", "layouts_dir"=>"_layouts", "data_dir"=>"_data", "includes_dir"=>"_includes", "collections"=>{"posts"=>{"output"=>true, "permalink"=>"/:categories/:year/:month/:day/:title:output_ext"}}, 
"safe"=>false, "include"=>["_pages"], "exclude"=>["bin", "Gemfile", "Gemfile.lock", "vendor", ".sass-cache", ".jekyll-cache", "gemfiles", "node_modules", "vendor/bundle/", "vendor/cache/", "vendor/gems/", "vendor/ruby/"], "keep_files"=>["CNAME", ".nojekyll", ".git"], "encoding"=>"utf-8", "markdown_ext"=>"markdown,mkdown,mkdn,mkd,md", "strict_front_matter"=>false, "show_drafts"=>nil, "limit_posts"=>0, "future"=>false, "unpublished"=>false, "whitelist"=>[], "plugins"=>[], "markdown"=>"kramdown", "highlighter"=>"rouge", "lsi"=>false, "excerpt_separator"=>"\n\n", "incremental"=>false, "detach"=>false, "port"=>"4000", "host"=>"127.0.0.1", "baseurl"=>nil, "show_dir_listing"=>false, "permalink"=>"date", "paginate_path"=>"/page:num", "timezone"=>nil, "quiet"=>false, "verbose"=>false, "defaults"=>[], "liquid"=>{"error_mode"=>"warn", "strict_filters"=>false, "strict_variables"=>false}, "kramdown"=>{"auto_ids"=>true, "toc_levels"=>[1, 2, 3, 4, 5, 6], "entity_output"=>"as_char", "smart_quotes"=>"lsquo,rsquo,ldquo,rdquo", "input"=>"GFM", "hard_wrap"=>false, "guess_lang"=>true, "footnote_nr"=>1, "show_warnings"=>false, "syntax_highlighter_opts"=>{:css_class=>"highlight", :span=>{"line_numbers"=>false}, :block=>{"line_numbers"=>false, "start_line"=>1}, :default_lang=>"plaintext", :guess_lang=>true}, "syntax_highlighter"=>"rouge", "coderay"=>{}}, "title"=>"blank", "first_name"=>nil, "middle_name"=>nil, "last_name"=>"IRIS Lab", "email"=>"wanxinjin@gmail.com", "description"=>#, "footer_text"=>nil, "icon"=>nil, "url"=>"http://localhost:4000", "last_updated"=>9.2012, "impressum_path"=>nil, "navbar_fixed"=>true, "footer_fixed"=>true, "max_width"=>"900px", "serve_og_meta"=>false, "og_image"=>nil, "github_username"=>nil, "gitlab_username"=>nil, "twitter_username"=>nil, "linkedin_username"=>nil, "scholar_userid"=>nil, "orcid_id"=>nil, "medium_username"=>nil, "quora_username"=>nil, "publons_id"=>nil, "research_gate_profile"=>nil, "blogger_url"=>nil, "work_url"=>nil, "keybase_username"=>nil, "wikidata_id"=>nil, "dblp_url"=>nil, "stackoverflow_id"=>nil, "contact_note"=>nil, "google_analytics"=>"UA-XXXXXXXXX", "panelbear_analytics"=>"XXXXXXXXX", "highlight_theme"=>"github", "github"=>["metadata"], "enable_google_analytics"=>false, "enable_panelbear_analytics"=>false, "enable_mansory"=>true, "enable_math"=>true, "enable_tooltips"=>false, "enable_darkmode"=>false, "enable_navbar_social"=>false, "enable_project_categories"=>false, "enable_medium_zoom"=>false, "academicons"=>{"version"=>"1.9.0", "integrity"=>"sha512-W4yqoT1+8NLkinBLBZko+dFB2ZbHsYLDdr50VElllRcNt2Q4/GSs6u71UHKxB7S6JEMCp5Ve4xjh3eGQl/HRvg=="}, "bootstrap"=>{"version"=>"4.5.2", "integrity"=>{"css"=>"sha512-MoRNloxbStBcD8z3M/2BmnT+rg4IsMxPkXaGh2zD6LGNNFE80W3onsAhRcMAMrSoyWL9xD7Ert0men7vR8LUZg==", "js"=>"sha512-M5KW3ztuIICmVIhjSqXe01oV2bpe248gOxqmlcYrEzAvws7Pw3z6BK0iGbrwvdrUQUhi3eXgtxp5I8PDo9YfjQ=="}}, "fontawesome"=>{"version"=>"5.14.0", "integrity"=>"sha512-1PKOgIY59xJ8Co8+NE6FZ+LOAZKjy+KY8iq0G4B3CyeY6wYHN3yt9PW0XpSriVlkMXe40PTKnXrLnZ9+fkDaog=="}, "jquery"=>{"version"=>"3.5.1", "integrity"=>"sha512-bLT0Qm9VnAYZDflyKcBaQ2gg0hSYNQrJ8RilYldYQ1FxQYoCLtUjuuRuZo+fjqhx/qtq/1itJ0C2ejDxltZVFg=="}, "mathjax"=>{"version"=>"3.2.0"}, "mansory"=>{"version"=>"4.2.2", "integrity"=>"sha256-Nn1q/fx0H7SNLZMQ5Hw5JLaTRZp0yILA/FRexe19VdI="}, "mdb"=>{"version"=>"4.19.1", "integrity"=>{"css"=>"sha512-RO38pBRxYH3SoOprtPTD86JFOclM51/XTIdEPh5j8sj4tp8jmQIx26twG52UaLi//hQldfrh7e51WzP9wuP32Q==", 
"js"=>"sha512-Mug9KHKmroQFMLm93zGrjhibM2z2Obg9l6qFG2qKjXEXkMp/VDkI4uju9m4QKPjWSwQ6O2qzZEnJDEeCw0Blcw=="}}, "popper"=>{"version"=>"2.4.4", "integrity"=>"sha512-eUQ9hGdLjBjY3F41CScH3UX+4JDSI9zXeroz7hJ+RteoCaY+GP/LDoM8AO+Pt+DRFw3nXqsjh9Zsts8hnYv8/A=="}, "medium_zoom"=>{"version"=>"1.0.6", "integrity"=>"sha256-EdPgYcPk/IIrw7FYeuJQexva49pVRZNmt3LculEr7zM="}, "livereload_port"=>35729, "serving"=>true, "watch"=>true, "scholar"=>{"style"=>"apa", "locale"=>"en", "sort_by"=>"none", "order"=>"ascending", "group_by"=>"none", "group_order"=>"ascending", "bibliography_group_tag"=>"h2,h3,h4,h5", "bibliography_list_tag"=>"ol", "bibliography_item_tag"=>"li", "bibliography_list_attributes"=>{}, "bibliography_item_attributes"=>{}, "source"=>"./_bibliography", "bibliography"=>"references.bib", "repository"=>nil, "repository_file_delimiter"=>".", "bibtex_options"=>{:strip=>false, :parse_months=>true}, "bibtex_filters"=>[:smallcaps, :superscript, :italics, :textit, :lowercase, :textregistered, :tiny, :latex], "raw_bibtex_filters"=>[], "bibtex_skip_fields"=>[:abstract, :month_numeric], "bibtex_quotes"=>["{", "}"], "replace_strings"=>true, "join_strings"=>true, "remove_duplicates"=>false, "details_dir"=>"bibliography", "details_layout"=>"bibtex.html", "details_link"=>"Details", "details_permalink"=>"/:details_dir/:key:extension", "bibliography_class"=>"bibliography", "bibliography_template"=>"{{reference}}", "reference_tagname"=>"span", "missing_reference"=>"(missing reference)", "details_link_class"=>"details", "query"=>"@*", "cite_class"=>"citation", "type_names"=>{"article"=>"Journal Articles", "book"=>"Books", "incollection"=>"Book Chapters", "inproceedings"=>"Conference Articles", "thesis"=>"Theses", "mastersthesis"=>"Master's Theses", "phdthesis"=>"PhD Theses", "manual"=>"Manuals", "techreport"=>"Technical Reports", "misc"=>"Miscellaneous", "unpublished"=>"Unpublished"}, "type_aliases"=>{"phdthesis"=>"thesis", "mastersthesis"=>"thesis"}, "type_order"=>[], "month_names"=>nil}}:ET \ No newline at end of file +I"{"source"=>"/Users/wanxinjin/Public/ASU Dropbox/Wanxin Jin/lab/website/asu-iris.github.io/asu-iris.github.io", "destination"=>"/Users/wanxinjin/Public/ASU Dropbox/Wanxin Jin/lab/website/asu-iris.github.io/asu-iris.github.io/_site", "collections_dir"=>"", "cache_dir"=>".jekyll-cache", "plugins_dir"=>"_plugins", "layouts_dir"=>"_layouts", "data_dir"=>"_data", "includes_dir"=>"_includes", "collections"=>{"posts"=>{"output"=>true, "permalink"=>"/:categories/:year/:month/:day/:title:output_ext"}}, "safe"=>false, "include"=>["_pages"], "exclude"=>["bin", "Gemfile", "Gemfile.lock", "vendor", ".sass-cache", ".jekyll-cache", "gemfiles", "node_modules", "vendor/bundle/", "vendor/cache/", "vendor/gems/", "vendor/ruby/"], "keep_files"=>["CNAME", ".nojekyll", ".git"], "encoding"=>"utf-8", "markdown_ext"=>"markdown,mkdown,mkdn,mkd,md", "strict_front_matter"=>false, "show_drafts"=>nil, "limit_posts"=>0, "future"=>false, "unpublished"=>false, "whitelist"=>[], "plugins"=>[], "markdown"=>"kramdown", "highlighter"=>"rouge", "lsi"=>false, "excerpt_separator"=>"\n\n", "incremental"=>false, "detach"=>false, "port"=>"4000", "host"=>"127.0.0.1", "baseurl"=>nil, "show_dir_listing"=>false, "permalink"=>"date", "paginate_path"=>"/page:num", "timezone"=>nil, "quiet"=>false, "verbose"=>false, "defaults"=>[], "liquid"=>{"error_mode"=>"warn", "strict_filters"=>false, "strict_variables"=>false}, "kramdown"=>{"auto_ids"=>true, "toc_levels"=>[1, 2, 3, 4, 5, 6], "entity_output"=>"as_char", "smart_quotes"=>"lsquo,rsquo,ldquo,rdquo", 
"input"=>"GFM", "hard_wrap"=>false, "guess_lang"=>true, "footnote_nr"=>1, "show_warnings"=>false, "syntax_highlighter_opts"=>{:css_class=>"highlight", :span=>{"line_numbers"=>false}, :block=>{"line_numbers"=>false, "start_line"=>1}, :default_lang=>"plaintext", :guess_lang=>true}, "syntax_highlighter"=>"rouge", "coderay"=>{}}, "title"=>"blank", "first_name"=>nil, "middle_name"=>nil, "last_name"=>"IRIS Lab", "email"=>"wanxinjin@gmail.com", "description"=>#, "footer_text"=>nil, "icon"=>nil, "url"=>"http://localhost:4000", "last_updated"=>9.2012, "impressum_path"=>nil, "navbar_fixed"=>true, "footer_fixed"=>true, "max_width"=>"900px", "serve_og_meta"=>false, "og_image"=>nil, "github_username"=>nil, "gitlab_username"=>nil, "twitter_username"=>nil, "linkedin_username"=>nil, "scholar_userid"=>nil, "orcid_id"=>nil, "medium_username"=>nil, "quora_username"=>nil, "publons_id"=>nil, "research_gate_profile"=>nil, "blogger_url"=>nil, "work_url"=>nil, "keybase_username"=>nil, "wikidata_id"=>nil, "dblp_url"=>nil, "stackoverflow_id"=>nil, "contact_note"=>nil, "google_analytics"=>"UA-XXXXXXXXX", "panelbear_analytics"=>"XXXXXXXXX", "highlight_theme"=>"github", "github"=>["metadata"], "enable_google_analytics"=>false, "enable_panelbear_analytics"=>false, "enable_mansory"=>true, "enable_math"=>true, "enable_tooltips"=>false, "enable_darkmode"=>false, "enable_navbar_social"=>false, "enable_project_categories"=>false, "enable_medium_zoom"=>false, "academicons"=>{"version"=>"1.9.0", "integrity"=>"sha512-W4yqoT1+8NLkinBLBZko+dFB2ZbHsYLDdr50VElllRcNt2Q4/GSs6u71UHKxB7S6JEMCp5Ve4xjh3eGQl/HRvg=="}, "bootstrap"=>{"version"=>"4.5.2", "integrity"=>{"css"=>"sha512-MoRNloxbStBcD8z3M/2BmnT+rg4IsMxPkXaGh2zD6LGNNFE80W3onsAhRcMAMrSoyWL9xD7Ert0men7vR8LUZg==", "js"=>"sha512-M5KW3ztuIICmVIhjSqXe01oV2bpe248gOxqmlcYrEzAvws7Pw3z6BK0iGbrwvdrUQUhi3eXgtxp5I8PDo9YfjQ=="}}, "fontawesome"=>{"version"=>"5.14.0", "integrity"=>"sha512-1PKOgIY59xJ8Co8+NE6FZ+LOAZKjy+KY8iq0G4B3CyeY6wYHN3yt9PW0XpSriVlkMXe40PTKnXrLnZ9+fkDaog=="}, "jquery"=>{"version"=>"3.5.1", "integrity"=>"sha512-bLT0Qm9VnAYZDflyKcBaQ2gg0hSYNQrJ8RilYldYQ1FxQYoCLtUjuuRuZo+fjqhx/qtq/1itJ0C2ejDxltZVFg=="}, "mathjax"=>{"version"=>"3.2.0"}, "mansory"=>{"version"=>"4.2.2", "integrity"=>"sha256-Nn1q/fx0H7SNLZMQ5Hw5JLaTRZp0yILA/FRexe19VdI="}, "mdb"=>{"version"=>"4.19.1", "integrity"=>{"css"=>"sha512-RO38pBRxYH3SoOprtPTD86JFOclM51/XTIdEPh5j8sj4tp8jmQIx26twG52UaLi//hQldfrh7e51WzP9wuP32Q==", "js"=>"sha512-Mug9KHKmroQFMLm93zGrjhibM2z2Obg9l6qFG2qKjXEXkMp/VDkI4uju9m4QKPjWSwQ6O2qzZEnJDEeCw0Blcw=="}}, "popper"=>{"version"=>"2.4.4", "integrity"=>"sha512-eUQ9hGdLjBjY3F41CScH3UX+4JDSI9zXeroz7hJ+RteoCaY+GP/LDoM8AO+Pt+DRFw3nXqsjh9Zsts8hnYv8/A=="}, "medium_zoom"=>{"version"=>"1.0.6", "integrity"=>"sha256-EdPgYcPk/IIrw7FYeuJQexva49pVRZNmt3LculEr7zM="}, "livereload_port"=>35729, "serving"=>true, "watch"=>true, "scholar"=>{"style"=>"apa", "locale"=>"en", "sort_by"=>"none", "order"=>"ascending", "group_by"=>"none", "group_order"=>"ascending", "bibliography_group_tag"=>"h2,h3,h4,h5", "bibliography_list_tag"=>"ol", "bibliography_item_tag"=>"li", "bibliography_list_attributes"=>{}, "bibliography_item_attributes"=>{}, "source"=>"./_bibliography", "bibliography"=>"references.bib", "repository"=>nil, "repository_file_delimiter"=>".", "bibtex_options"=>{:strip=>false, :parse_months=>true}, "bibtex_filters"=>[:smallcaps, :superscript, :italics, :textit, :lowercase, :textregistered, :tiny, :latex], "raw_bibtex_filters"=>[], "bibtex_skip_fields"=>[:abstract, :month_numeric], "bibtex_quotes"=>["{", "}"], 
"replace_strings"=>true, "join_strings"=>true, "remove_duplicates"=>false, "details_dir"=>"bibliography", "details_layout"=>"bibtex.html", "details_link"=>"Details", "details_permalink"=>"/:details_dir/:key:extension", "bibliography_class"=>"bibliography", "bibliography_template"=>"{{reference}}", "reference_tagname"=>"span", "missing_reference"=>"(missing reference)", "details_link_class"=>"details", "query"=>"@*", "cite_class"=>"citation", "type_names"=>{"article"=>"Journal Articles", "book"=>"Books", "incollection"=>"Book Chapters", "inproceedings"=>"Conference Articles", "thesis"=>"Theses", "mastersthesis"=>"Master's Theses", "phdthesis"=>"PhD Theses", "manual"=>"Manuals", "techreport"=>"Technical Reports", "misc"=>"Miscellaneous", "unpublished"=>"Unpublished"}, "type_aliases"=>{"phdthesis"=>"thesis", "mastersthesis"=>"thesis"}, "type_order"=>[], "month_names"=>nil}}:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/04/861f91c2aa0ea0c273344a9aeb9750f4e86ca9413f8d9019050aa1dc0aa038 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/04/861f91c2aa0ea0c273344a9aeb9750f4e86ca9413f8d9019050aa1dc0aa038 new file mode 100644 index 0000000..c4affaa --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/04/861f91c2aa0ea0c273344a9aeb9750f4e86ca9413f8d9019050aa1dc0aa038 @@ -0,0 +1,402 @@ +I" function toggleFoldableSection(element) {element.parentElement.classList.toggle("active");} + +

+ +

The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each research direction. +Please visit the Publications page for a full list of publications.

+ +


+
+
Human-autonomy alignment
+
+ +We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions. + +
+
+
    +
  • Robot learning from general human interactions
  • +
  • Planning and control for human-robot systems
  • +
+
+ + + +
+ +
+
Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
+
Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
+
arXiv preprint, 2024
+ +
+
+
+ + + +
+ +
+
Safe MPC Alignment with Human Directional Feedback
+
Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George J. Pappas, and Wanxin Jin
+
Submitted to IEEE Transactions on Robotics (T-RO), 2024
+ +
+
+
+ + + +
+ +
+
Learning from Human Directional Corrections
+
Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
+
IEEE Transactions on Robotics (T-RO), 2023
+ +
+
+
+ + +
+ +
+
Learning from Sparse Demonstrations
+
Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
+
IEEE Transactions on Robotics (T-RO), 2023
+ +
+
+
+ + + + +
+ +
+
Inverse Optimal Control from Incomplete Trajectory Observations
+
Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
+
International Journal of Robotics Research (IJRR), 40:848–865, 2021
+ +
+
+
+ + + + +
+ +
+
Inverse Optimal Control for Multiphase Cost Functions
+
Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
+
IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
+ +
+
+ + + +
+
+ +


+ +
+
Contact-rich manipulation
+
+ + +We aim to leverage physical principles to develop efficient representations or models for a robot's physical interaction with its environment. We also focus on developing algorithms that enable robots to efficiently and robustly manipulate their surroundings/objects through contact. + +

+ +
    +
  • Learning, planning, and control for contact-rich manipulation
  • +
  • Computer vision and learnable geometry for dexterous manipulation
  • +
+
+ + + +
+ +
+
ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
+
Wen Yang and Wanxin Jin
+
Submitted to IEEE Robotics and Automation Letters (RA-L), 2024
+ +
+
+
+ + +
+ +
+
Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
+
Wanxin Jin
+
arXiv preprint, 2024
+ +
+
+
+ + + +
+ +
+
Task-Driven Hybrid Model Reduction for Dexterous Manipulation
+
Wanxin Jin and Michael Posa
+
IEEE Transactions on Robotics (T-RO), 2024
+ +
+
+
+ + + + +
+ +
+
Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
+
Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
+
IEEE International Conference on Robotics and Automation (ICRA), 2024
+ +
+
+
+ + + +
+ +
+
Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
+
Shenao Zhang, Wanxin Jin, Zhaoran Wang
+
International Conference on Machine Learning (ICML), 2023
+ +
+
+
+ + + +
+ +
+
Learning Linear Complementarity Systems
+
Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
+
Learning for Dynamics and Control (L4DC), 2022
+ +
+
+ + + +
+
+ +


+ +
+
Fundamental methods in robotics
+
+ +We focus on developing fundamental theories and algorithms for achieving efficient, safe, and robust robot intelligence. Our methods lie at the intersection of model-based (control and optimization) and data-driven approaches, harnessing the complementary benefits of both. + +

+ +
    +
  • Optimal control, motion planning, reinforcement learning
  • +
  • Differentiable optimization, inverse optimization
  • +
  • Hybrid system learning and control
  • +
+
+ + + +
+ +
+
Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
+
Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
+
Advances in Neural Information Processing Systems (NeurIPS), 2020
+ +
+
+
+ + +
+ +
+
Safe Pontryagin Differentiable Programming
+
Wanxin Jin, Shaoshuai Mou, and George J. Pappas
+
Advances in Neural Information Processing Systems (NeurIPS), 2021
+ +
+
+
+ + +
+ +
+
Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
+
Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
+
IEEE Robotics and Automation Letters (RA-L), 2023
+ +
+
+
+ + + + +
+ +
+
Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
+
Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
+
International Conference on Machine Learning (ICML), 2023
+ +
+
+
+ + +
+ +
+
A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
+
Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
+
Submitted to IEEE Transactions on Robotics (T-RO), 2024
+ +
+
+ + +
+
+ + + + + + +

+ + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/09/03080141bad686fd4bb004a1d544028069966f3aa2fe0b823943b7bc63f840 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/09/03080141bad686fd4bb004a1d544028069966f3aa2fe0b823943b7bc63f840 new file mode 100644 index 0000000..76d9013 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/09/03080141bad686fd4bb004a1d544028069966f3aa2fe0b823943b7bc63f840 @@ -0,0 +1,148 @@ +I"#

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research areas include

+ +
    +
  • +

    Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

    +
  • +
  • +

    Contact-rich dexterous manipulation: we develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

    +
  • +
  • +

    Fundamental methods for robot autonomy: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

    +
  • +
+ +

+ +
+ +     + +     + +     + +     + +     + +
+ +


+ +

Recent Updates

+ +

+ +
+ + +
+
Oct 15, 2024
+
+

+ 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

+
+ +
+
+ Check out the preprint. Here is a long demo: +

+
+ +
+
+
+ + + +


+ + + + +
+
Aug 24, 2024
+
+

+ 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

+
+ +
+
+ Check out the webpage, preprint, and code. Here is a long demo: +

+
+ +
+
+
+ + + +


+ +
+
Aug 19, 2024
+
+

+ Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." +

+

🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

+ Our method sets a new benchmark in dexterous manipulation: +
    +
  • 🎯 A 96.5% success rate across all tasks
  • +
  • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
  • +
  • 🚀 Model predictive control running at 50-100 Hz for all tasks
  • +
+
+ +
+
+ Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

+
+ +
+
+
+ + + +


+
+
July 9, 2024
+
+

+ 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases, such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than specifying rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible and can be done with very little human effort! Our method, called Safe MPC Alignment (submitted to T-RO), enables a robot to learn its control safety constraints from only a handful of online human corrections! +

+ Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed to successfully learn the safety constraints, or it declares misspecification of the hypothesis space, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
+ +
+
+ Check out the project website, preprint, and a brief introduction video below. +
+
+ +
+
+
+ + +
+ +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0a/5217a25297a82b96efd6a9ed462ade2c9581e9338d54035e1c5849d4e06982 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0a/5217a25297a82b96efd6a9ed462ade2c9581e9338d54035e1c5849d4e06982 new file mode 100644 index 0000000..5de72a2 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0a/5217a25297a82b96efd6a9ed462ade2c9581e9338d54035e1c5849d4e06982 @@ -0,0 +1,148 @@ +I"K$

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research areas include

+ +
    +
  • +

    Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

    +
  • +
  • +

    Contact-rich dexterous manipulation: we develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

    +
  • +
  • +

    Fundamental methods for robot autonomy: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

    +
  • +
+ +

+ +
+ +     + +     + +     + +     + +     + +
+ +


+ +

Recent Updates

+ +

+ +
+ + +
+
Oct 15, 2024
+
+

+ 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

+
+ +
+
+ Check out the preprint. Here is a long demo: +

+
+ +
+
+
+ + + +


+ + + + +
+
Aug 24, 2024
+
+

+ 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

+
+ +
+
+ Check out the webpage, preprint, and code. Here is a long demo: +

+
+ +
+
+
+ + + +


+ +
+
Aug 19, 2024
+
+

+ Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share: 🔥🔥"Complementarity-Free Multi-Contact Modeling and Optimization," our latest method, which sets new benchmarks across various challenging dexterous manipulation tasks. +

+

"Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

+ Our method sets a new benchmark in dexterous manipulation: +
    +
  • 🎯 A 96.5% success rate across all tasks
  • +
  • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
  • +
  • 🚀 Model predictive control running at 50-100 Hz for all tasks
  • +
+
+ +
+
+ Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

+
+ +
+
+
+ + + +


+
+
July 9, 2024
+
+

+ 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases, such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than specifying rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible and can be done with very little human effort! Our method, called Safe MPC Alignment (submitted to T-RO), enables a robot to learn its control safety constraints from only a handful of online human corrections! +

+ Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed to successfully learn the safety constraints, or it declares misspecification of the hypothesis space, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
+ +
+
+ Check out the project website, preprint, and a brief introduction video below. +
+
+ +
+
+
+ + +
+ +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0e/25c61d85d04565a70b150be835fb92d7d090e05b408466aa485828991d489c b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0e/25c61d85d04565a70b150be835fb92d7d090e05b408466aa485828991d489c new file mode 100644 index 0000000..88af47c --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/0e/25c61d85d04565a70b150be835fb92d7d090e05b408466aa485828991d489c @@ -0,0 +1,402 @@ +I"Q + +

+ +

The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each research direction. +Please visit the Publications page for a full list of publications.

+ +


+
+
Human-autonomy alignment
+
+ +We develop methods to empower a robot with the ability to efficiently understand and be understood by human users through a variety of physical interactions. We explore how robots can aptly respond to and collaborate meaningfully with users. + +
+
+
    +
  • Robot learning from general human interactions
  • +
  • Planning and control for human-robot systems
  • +
+
+ + + +
+ +
+
Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
+
Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
+
arXiv preprint, 2024
+ +
+
+
+ + + +
+ +
+
Safe MPC Alignment with Human Directional Feedback
+
Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George J. Pappas, and Wanxin Jin
+
Submitted to IEEE Transactions on Robotics (T-RO), 2024
+ +
+
+
+ + + +
+ +
+
Learning from Human Directional Corrections
+
Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
+
IEEE Transactions on Robotics (T-RO), 2023
+ +
+
+
+ + +
+ +
+
Learning from Sparse Demonstrations
+
Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
+
IEEE Transactions on Robotics (T-RO), 2023
+ +
+
+
+ + + + +
+ +
+
Inverse Optimal Control from Incomplete Trajectory Observations
+
Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
+
International Journal of Robotics Research (IJRR), 40:848–865, 2021
+ +
+
+
+ + + + +
+ +
+
Inverse Optimal Control for Multiphase Cost Functions
+
Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
+
IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
+ +
+
+ + + +
+
+ +


+ +
+
Contact-rich manipulation
+
+ + +We aim to leverage physical principles to develop efficient representations or models for a robot's physical interaction with its environment. We also focus on developing algorithms that enable robots to efficiently and robustly manipulate their surroundings/objects through contact. + +

+ +
    +
  • Learning, planning, and control for contact-rich manipulation
  • +
  • Computer vision and learnable geometry for dexterous manipulation
  • +
+
+ + + +
+ +
+
ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
+
Wen Yang and Wanxin Jin
+
Submitted to IEEE Robotics and Automation Letters (RA-L), 2024
+ +
+
+
+ + +
+ +
+
Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
+
Wanxin Jin
+
arXiv preprint, 2024
+ +
+
+
+ + + +
+ +
+
Task-Driven Hybrid Model Reduction for Dexterous Manipulation
+
Wanxin Jin and Michael Posa
+
IEEE Transactions on Robotics (T-RO), 2024
+ +
+
+
+ + + + +
+ +
+
Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
+
Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
+
IEEE International Conference on Robotics and Automation (ICRA), 2024
+ +
+
+
+ + + +
+ +
+
Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
+
Shenao Zhang, Wanxin Jin, Zhaoran Wang
+
International Conference on Machine Learning (ICML), 2023
+ +
+
+
+ + + +
+ +
+
Learning Linear Complementarity Systems
+
Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
+
Learning for Dynamics and Control (L4DC), 2022
+ +
+
+ + + +
+
+ +


+ +
+
Fundamental methods in robotics
+
+ +We focus on developing fundamental theories and algorithms for achieving efficient, safe, and robust robot intelligence. Our methods lie at the intersection of model-based (control and optimization) and data-driven approaches, harnessing the complementary benefits of both. + +

+ +
    +
  • Optimal control, motion planning, reinforcement learning
  • +
  • Differentiable optimization, inverse optimization
  • +
  • Hybrid system learning and control
  • +
+
+ + + +
+ +
+
Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
+
Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
+
Advances in Neural Information Processing Systems (NeurIPS), 2020
+ +
+
+
+ + +
+ +
+
Safe Pontryagin Differentiable Programming
+
Wanxin Jin, Shaoshuai Mou, and George J. Pappas
+
Advances in Neural Information Processing Systems (NeurIPS), 2021
+ +
+
+
+ + +
+ +
+
Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
+
Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
+
IEEE Robotics and Automation Letters (RA-L), 2023
+ +
+
+
+ + + + +
+ +
+
Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
+
Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
+
International Conference on Machine Learning (ICML), 2023
+ +
+
+
+ + +
+ +
+
A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
+
Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
+
Submitted to IEEE Transactions on Robotics (T-RO), 2024
+ +
+
+ + +
+
+ + + + + + +

+ + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/29/e861df560fdb4a8a828958770d97e010f26036cbd4a48d2acd0469d67a9d1b b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/29/e861df560fdb4a8a828958770d97e010f26036cbd4a48d2acd0469d67a9d1b deleted file mode 100644 index 161dbcd..0000000 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/29/e861df560fdb4a8a828958770d97e010f26036cbd4a48d2acd0469d67a9d1b +++ /dev/null @@ -1,612 +0,0 @@ -I"k - - - - - - - - -
-All -Human-robot alignment -Contact-rich manipulation -Fundamental methods -
- - - -

- - - -
-

2024

-
-
- -
- -
-
Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
-
Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
-
arXiv preprint arXiv:2410.09286, 2024
- -
-
-
- - -
- -
-
ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
-
Wen Yang and Wanxin Jin
-
Submitted to IEEE Robotics and Automation Letters (RA-L), 2024, 2024
- -
-
-
- - -
- -
-
Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
-
Wanxin Jin
-
arXiv preprint, 2024
- -
-
-
- - -
- -
-
Safe MPC Alignment with Human Directional Feedback
-
Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George. J. Pappas and Wanxin Jin
-
Submitted to IEEE Transactions on Robotics (T-RO), 2024
- -
-
-
- - -
- -
-
A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
-
Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
-
Submitted to IEEE Transactions on Robotics (T-RO), 2024
- -
-
-
- - -
- -
-
D3G: Learning Multi-robot Coordination from Demonstrations
-
Yizhi Zhou, Wanxin Jin, Xuan Wang
-
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2024
- -
-
-
- - -
- -
-
TacTID: High-performance Visuo-Tactile Sensor-based Terrain Identification for Legged Robots
-
Ziwu Song, Chenchang Li, Zhentan Quan, Shilong Mu, Xiaosa Li, Ziyi Zhao, Wanxin Jin, Chenye Wu, Wenbo Ding, Xiao-Ping Zhang
-
IEEE Sensors Journal, 2024
-
- Paper - -
-
-
-
- - -
- -
-
How Can LLM Guide RL? A Value-Based Approach
-
Shenao Zhang, Sirui Zheng, Shuqi Ke, Zhihan Liu, Wanxin Jin, Jianbo Yuan, Yingxiang Yang, Hongxia Yang, Zhaoran Wang
-
arXiv preprint, 2024
-
- Paper - Code - -
-
-
-
- - -
- -
-
Task-Driven Hybrid Model Reduction for Dexterous Manipulation
-
Wanxin Jin and Michael Posa
-
IEEE Transactions on Robotics (T-RO), 2024
- -
-
-
- - - -
- -
-
Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
-
Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
-
IEEE International Conference on Robotics and Automation (ICRA), 2024
- -
-
-
- - - - - -

- - - -
-

2023

-
-
- - -
- -
-
Guaranteed Stabilization and Safety of Nonlinear Systems via Sliding Mode Control
-
Fan Ding, Jin Ke, Wanxin Jin, Jianping He, and Xiaoming Duan
-
IEEE Control Systems Letters, 2023
- -
-
-
- - - - - - - - -
- -
-
Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
-
Shenao Zhang, Wanxin Jin, Zhaoran Wang
-
International Conference on Machine Learning (ICML), 2023
- -
-
-
- - - - -
- -
-
Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
-
Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
-
International Conference on Machine Learning (ICML), 2023
- -
-
-
- - - - -
- -
-
Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
-
Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
-
IEEE Robotics and Automation Letters (RA-L), 2023
- -
-
-
- - - -
- -
-
D3G: Learning Multi-robot Coordination from Demonstrations
-
Xuan Wang, YiZhi Zhou, and Wanxin Jin
-
IEEE International Conference on Intelligent Robots and Systems (IROS), 2023.
- -
-
-
- - - - - -
- -
-
Identifying Reaction-Aware Driving Styles of Stochastic Model Predictive Controlled Vehicles by Inverse Reinforcement Learning
-
Ni Dang, Tao Shi, Zengjie Zhang, Wanxin Jin, Marion Leibold, and Martin Buss
-
International Conference on Intelligent Transportation Systems (ITSC), 2023.
- -
-
-
- - - - - - - -

- - -
-

2022

-
-
- - -
- -
-
Learning from Human Directional Corrections
-
Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
-
IEEE Transactions on Robotics (T-RO), 2023
- -
-
-
- - - - - -
- -
-
Learning from Sparse Demonstrations
-
Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
-
IEEE Transactions on Robotics (T-RO), 2023
- -
-
-
- - - - - - -
- -
-
Learning Linear Complementarity Systems
-
Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
-
Learning for Dynamics and Control (L4DC), 2022
- -
-
-
- - - - - - - -
- -
-
Cooperative Tuning of Multi-Agent Optimal Control Systems
-
Zehui Lu, Wanxin Jin, Shaoshuai Mou, Brian D. O. Anderson
-
IEEE Conference on Decision and Control (CDC), 2022
- -
-
-
- - - - - -

- - -
-

2021

-
-
- - -
- -
-
Inverse Optimal Control from Incomplete Trajectory Observations
-
Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
-
International Journal of Robotics Research (IJRR), 40:848–865, 2021
- -
-
-
- - - -
- -
-
Safe Pontryagin Differentiable Programming
-
Wanxin Jin, Shaoshuai Mou, and George J. Pappas
-
Advances in Neural Information Processing Systems (NeurIPS), 2021
- -
-
-
- - - - - - -
- -
-
Distributed Inverse Optimal Control
-
Wanxin Jin and Shaoshuai Mou
-
Automatica, Volume 129, 2021
- -
-
-
- - - - - - - -
- -
-
Human-Automation Interaction for Assisting Novices to Emulate Experts by Inferring Task Objective Functions
-
Sooyung Byeon, Wanxin Jin, Dawei Sun, and Inseok Hwang
-
AIAA/IEEE 40th Digital Avionics Systems Conference (DASC) , 2021. Best Student Paper Finalist
- -
-
-
- - - - - - - - - -

- - -
-

2020

-
-
- - - -
- -
-
Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
-
Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
-
Advances in Neural Information Processing Systems (NeurIPS), 2020
- -
-
-
- - - - - - -

- - -
-

2019

-
-
- - - - -
- -
-
Inverse Optimal Control for Multiphase cost functions
-
Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
-
IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
- -
-
-
- - - - - - - - - - - - - - - - - -:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/12/2ee68f507a6ea3e299d5a9a36ea75e461903b6697aa5cbf25e2e4aac01a7d3 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/2a/8a064e6e7ee5103484b988711d0f7ac14e01a063126ea8af5428bba512b537 similarity index 95% rename from .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/12/2ee68f507a6ea3e299d5a9a36ea75e461903b6697aa5cbf25e2e4aac01a7d3 rename to .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/2a/8a064e6e7ee5103484b988711d0f7ac14e01a063126ea8af5428bba512b537 index 8c2fad4..17d3345 100644 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/12/2ee68f507a6ea3e299d5a9a36ea75e461903b6697aa5cbf25e2e4aac01a7d3 +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/2a/8a064e6e7ee5103484b988711d0f7ac14e01a063126ea8af5428bba512b537 @@ -1,8 +1,8 @@ -I"$

This is Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focuses include

+I"Z$

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research areas include

  • -

    Human-robot alignment: We develop innovative methods that enable robots to seamlessly understand and communicate with humans through various physical interactions. Our work includes developing adaptive learning algorithms and intuitive control interfaces to enhance representation alignment between humans and robots.

    +

    Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

  • Contact-rich manipulation: We develop advanced physics-based representations and frameworks that enable robots to interact with and manipulate physical objects efficiently and precisely. Our goal is to enhance robots’ capabilities in performing complex tasks, such as assembly and sorting, in unstructured environments.

    diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/3e/6db327fed4d4110f30198187d12b67353e9515b75cb0b6250807ca02a0eba0 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/3e/6db327fed4d4110f30198187d12b67353e9515b75cb0b6250807ca02a0eba0 new file mode 100644 index 0000000..73ba068 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/3e/6db327fed4d4110f30198187d12b67353e9515b75cb0b6250807ca02a0eba0 @@ -0,0 +1,148 @@ +I"c$

    This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research areas include

    + +
      +
    • +

      Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

      Contact-rich dexterous manipulation: we develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share: "Complementarity-Free Multi-Contact Modeling and Optimization," our latest method, which sets new benchmarks across various challenging dexterous manipulation tasks. +

    +

    🔥 Thrilled to share our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
    July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases, such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than specifying rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible and can be done with very little human effort! Our method, called Safe MPC Alignment (submitted to T-RO), enables a robot to learn its control safety constraints from only a handful of online human corrections! +

    + Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed to successfully learn the safety constraints, or it declares misspecification of the hypothesis space, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
    + Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/67/43d7a64a7111fb4f44eb62d3337d1cd231e2303e12fc32029098269afdf50e b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/67/43d7a64a7111fb4f44eb62d3337d1cd231e2303e12fc32029098269afdf50e new file mode 100644 index 0000000..ea55d08 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/67/43d7a64a7111fb4f44eb62d3337d1cd231e2303e12fc32029098269afdf50e @@ -0,0 +1,148 @@ +I"#

    This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research areas include

    + +
      +
    • +

      Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

      Contact-rich dexterous manipulation: we develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental theories/algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." +

    +

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
    July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases, such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than specifying rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible and can be done with very little human effort! Our method, called Safe MPC Alignment (submitted to T-RO), enables a robot to learn its control safety constraints from only a handful of online human corrections! +

    + Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed to successfully learn the safety constraints, or it declares misspecification of the hypothesis space, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
+ Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/70/b43bf0813a4f14020da0b6414450733746fa32a3575071b882909c92b1d184 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/70/b43bf0813a4f14020da0b6414450733746fa32a3575071b882909c92b1d184 deleted file mode 100644 index f533258..0000000 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/70/b43bf0813a4f14020da0b6414450733746fa32a3575071b882909c92b1d184 +++ /dev/null @@ -1,612 +0,0 @@ -I"k - - - - - - - - -
    -All -Human-robot alignment -Contact-rich manipulation -Fundamental methods -
    - - - -

    - - - -
    -

    2024

    -
    -
    - -
    - -
    -
    Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
    -
    Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
    -
    arXiv preprint arXiv:2410.09286, 2024
    - -
    -
    -
    - - -
    - -
    -
    ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
    -
    Wen Yang and Wanxin Jin
    -
    Submitted to IEEE Robotics and Automation Letters (T-RO), 2024, 2024
    - -
    -
    -
    - - -
    - -
    -
    Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
    -
    Wanxin Jin
    -
    arXiv preprint, 2024
    - -
    -
    -
    - - -
    - -
    -
    Safe MPC Alignment with Human Directional Feedback
    -
    Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George. J. Pappas and Wanxin Jin
    -
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    - -
    -
    -
    - - -
    - -
    -
    A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
    -
    Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
    -
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    - -
    -
    -
    - - -
    - -
    -
    D3G: Learning Multi-robot Coordination from Demonstrations
    -
    Yizhi Zhou, Wanxin Jin, Xuan Wang
    -
    IEEE/RSJ International Conference on Intelligent Robots and Systems, 2024
    - -
    -
    -
    - - -
    - -
    -
    TacTID: High-performance Visuo-Tactile Sensor-based Terrain Identification for Legged Robots
    -
    Ziwu Song, Chenchang Li, Zhentan Quan, Shilong Mu, Xiaosa Li, Ziyi Zhao, Wanxin Jin, Chenye Wu, Wenbo Ding, Xiao-Ping Zhang
    -
    IEEE Sensors Journal, 2024
    -
    - Paper - -
    -
    -
    -
    - - -
    - -
    -
    How Can LLM Guide RL? A Value-Based Approach
    -
    Shenao Zhang, Sirui Zheng, Shuqi Ke, Zhihan Liu, Wanxin Jin, Jianbo Yuan, Yingxiang Yang, Hongxia Yang, Zhaoran Wang
    -
    arXiv preprint, 2024
    -
    - Paper - Code - -
    -
    -
    -
    - - -
    - -
    -
    Task-Driven Hybrid Model Reduction for Dexterous Manipulation
    -
    Wanxin Jin and Michael Posa
    -
    IEEE Transactions on Robotics (T-RO), 2024
    - -
    -
    -
    - - - -
    - -
    -
    Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
    -
    Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
    -
    IEEE International Conference on Robotics and Automation (ICRA), 2024
    - -
    -
    -
    - - - - - -

    - - - -
    -

    2023

    -
    -
    - - -
    - -
    -
    Guaranteed Stabilization and Safety of Nonlinear Systems via Sliding Mode Control
    -
    Fan Ding, Jin Ke, Wanxin Jin, Jianping He, and Xiaoming Duan
    -
    IEEE Control Systems Letters, 2023
    - -
    -
    -
    - - - - - - - - -
    - -
    -
    Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
    -
    Shenao Zhang, Wanxin Jin, Zhaoran Wang
    -
    International Conference on Machine Learning (ICML), 2023
    - -
    -
    -
    - - - - -
    - -
    -
    Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
    -
    Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
    -
    International Conference on Machine Learning (ICML), 2023
    - -
    -
    -
    - - - - -
    - -
    -
    Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
    -
    Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
    -
    IEEE Robotics and Automation Letters (RA-L), 2023
    - -
    -
    -
    - - - -
    - -
    -
    D3G: Learning Multi-robot Coordination from Demonstrations
    -
    Xuan Wang, YiZhi Zhou, and Wanxin Jin
    -
    IEEE International Conference on Intelligent Robots and Systems (IROS), 2023.
    - -
    -
    -
    - - - - - -
    - -
    -
    Identifying Reaction-Aware Driving Styles of Stochastic Model Predictive Controlled Vehicles by Inverse Reinforcement Learning
    -
    Ni Dang, Tao Shi, Zengjie Zhang, Wanxin Jin, Marion Leibold, and Martin Buss
    -
    International Conference on Intelligent Transportation Systems (ITSC), 2023.
    - -
    -
    -
    - - - - - - - -

    - - -
    -

    2022

    -
    -
    - - -
    - -
    -
    Learning from Human Directional Corrections
    -
    Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
    -
    IEEE Transactions on Robotics (T-RO), 2023
    - -
    -
    -
    - - - - - -
    - -
    -
    Learning from Sparse Demonstrations
    -
    Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
    -
    IEEE Transactions on Robotics (T-RO), 2023
    - -
    -
    -
    - - - - - - -
    - -
    -
    Learning Linear Complementarity Systems
    -
    Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
    -
    Learning for Dynamics and Control (L4DC), 2022
    - -
    -
    -
    - - - - - - - -
    - -
    -
    Cooperative Tuning of Multi-Agent Optimal Control Systems
    -
    Zehui Lu, Wanxin Jin, Shaoshuai Mou, Brian D. O. Anderson
    -
    IEEE Conference on Decision and Control (CDC), 2022
    - -
    -
    -
    - - - - - -

    - - -
    -

    2021

    -
    -
    - - -
    - -
    -
    Inverse Optimal Control from Incomplete Trajectory Observations
    -
    Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
    -
    International Journal of Robotics Research (IJRR), 40:848–865, 2021
    - -
    -
    -
    - - - -
    - -
    -
    Safe Pontryagin Differentiable Programming
    -
    Wanxin Jin, Shaoshuai Mou, and George J. Pappas
    -
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    - -
    -
    -
    - - - - - - -
    - -
    -
    Distributed Inverse Optimal Control
    -
    Wanxin Jin and Shaoshuai Mou
    -
    Automatica, Volume 129, 2021
    - -
    -
    -
    - - - - - - - -
    - -
    -
    Human-Automation Interaction for Assisting Novices to Emulate Experts by Inferring Task Objective Functions
    -
    Sooyung Byeon, Wanxin Jin, Dawei Sun, and Inseok Hwang
    -
    AIAA/IEEE 40th Digital Avionics Systems Conference (DASC) , 2021. Best Student Paper Finalist
    - -
    -
    -
    - - - - - - - - - -

    - - -
    -

    2020

    -
    -
    - - - -
    - -
    -
    Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
    -
    Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
    -
    Advances in Neural Information Processing Systems (NeurIPS), 2020
    - -
    -
    -
    - - - - - - -

    - - -
    -

    2019

    -
    -
    - - - - -
    - -
    -
    Inverse Optimal Control for Multiphase cost functions
    -
    Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
    -
    IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
    - -
    -
    -
    - - - - - - - - - - - - - - - - - -:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/7b/334601b0d0399914c8620067658bfbd690954018d85949fd8c244233b6f24c b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/7b/334601b0d0399914c8620067658bfbd690954018d85949fd8c244233b6f24c new file mode 100644 index 0000000..955fca1 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/7b/334601b0d0399914c8620067658bfbd690954018d85949fd8c244233b6f24c @@ -0,0 +1,148 @@ +I"A$

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focus areas include

    + +
      +
    • +

      Human-autonomy alignment: We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

      Contact-rich dexterous manipulation: We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental computational methods: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

+ Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share 🔥🔥 "Complementarity-Free Multi-Contact Modeling and Optimization", our latest method that sets new benchmarks across various challenging dexterous manipulation tasks. +

    +

    "Complementarity-Free Multi-Contact Modeling and Optimization," consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

+ Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total number of human corrections needed to successfully learn the safety constraints, or it declares the hypothesis space misspecified, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
+ Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/1e/843110d55507591d160cc2f9c3a49bc237a3aecfde8d02c4f20e93cf4b1f56 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/81/38a47f3f76da7c9e435c0a024970e8d5aa4c00cf874c076c3be566df600423 similarity index 98% rename from .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/1e/843110d55507591d160cc2f9c3a49bc237a3aecfde8d02c4f20e93cf4b1f56 rename to .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/81/38a47f3f76da7c9e435c0a024970e8d5aa4c00cf874c076c3be566df600423 index f6d242f..9cbb132 100644 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/1e/843110d55507591d160cc2f9c3a49bc237a3aecfde8d02c4f20e93cf4b1f56 +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/81/38a47f3f76da7c9e435c0a024970e8d5aa4c00cf874c076c3be566df600423 @@ -1,8 +1,8 @@ -I"kQ +I"Q

    -

    The IRIS lab focuses on three reserach directions: (1) human-robot alignment, (2) contact-rich manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each set of research interest. +

The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each research area. Please visit the Publications page for a full list of publications.


    @@ -27,7 +27,7 @@ We develop methods to empower a robot with the ability to efficiently understand
    Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
    Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
    -
    arXiv preprint arXiv:2410.09286, 2024
    +
    arXiv preprint, 2024
    Paper Video @@ -149,7 +149,7 @@ We aim to leverage physical principles to develop efficient representations or
    ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
    Wen Yang and Wanxin Jin
    -
    arXiv preprint, 2024
    +
    Submitted to IEEE Robotics and Automation Letters (RA-L), 2024
    Webpage Paper diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/86/ca7b3b0236b8d4cb59d30511bbf2c30bce8213c84d1271ba61af98c23ced7a b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/86/ca7b3b0236b8d4cb59d30511bbf2c30bce8213c84d1271ba61af98c23ced7a deleted file mode 100644 index 4b4e391..0000000 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/86/ca7b3b0236b8d4cb59d30511bbf2c30bce8213c84d1271ba61af98c23ced7a +++ /dev/null @@ -1,79 +0,0 @@ -I"

    - -

    For General Students at ASU

    - -

    - -

    We are actively looking for self-motivated, passionate, and dedicated undergraduate and graduate students to join our lab. Ideal candidates should possess a background and project experience in one or more of the following areas:

    - -
      -
    • Dynamics and controls
    • -
    • Robot modeling, control, and planning
    • -
    • Machine learning
    • -
    • Computer vision
    • -
    • Optimization
    • -
    - -

    If you are interested, please submit your application here. We will contact you if we find that your background aligns with the research directions of the IRIS lab.

    - -

    -
    - -

    - -

    Ph.D. Positions

    - -

    - -

    The IRIS lab is constantly seeking outstanding Ph.D. applicants interested in one of the following research topics:

    - -
    Research Topics
    -
      -
    • Robot learning with humans
    • -
    • Learning and control for robot manipulation
    • -
    • Fundamental research in control, machine learning, and optimization
    • -
    - -

    - -
    Basic Requirement
    -
      -
    • Strong passion and desire to solve challenging problems
    • -
    • Master degree in ME, EE, or CS, with outstanding academic performance
    • -
    • Strong analytical and coding skills
    • -
    • Good communication skills in both written and spoken English
    • -
    - -

    - -
    Desired Skills
    -
      -
    • Knowledge of controls, planning, dynamics, or/and data-driven methods
    • -
    • Knowledge of optimization and linear algebra
    • -
    • Experience in computer vision
    • -
    • Experience in Python and C++
    • -
    - -

    - -
    How to Apply (applying for one of two programs below)
    - -

    - -
      -
    • -

      Apply for ME PhD Program: Interested applicants can apply for ASU Mechanical Engineering PhD Program via https://semte.engineering.asu.edu/mechanical-graduate/ and indicate Dr. Wanxin Jin as your prospective supervisor in your statement of purpose and application form.

      -
    • -
    • -

      Apply for EE PhD Program Interested applicants can also apply for ASU Electrical Engineering PhD Program via https://degrees.apps.asu.edu/masters-phd/major/ASU00/ESEEPHD/electrical-engineering-phd and indicate Dr. Wanxin Jin as your prospective supervisor in your statement of purpose and application form.

      -
    • -
    • -

      Note that ASU ME PhD Program has the Deficiency Policy, which requires any non-ME undergraduate background student, admitted in ME PhD program, to take ME undergraduate courses (usually 3 courses) in addition to their graduate course requirement. Therefore, any student with non-ME undergraduate backgrounds is encouarged to apply for EE PhD Program in order to join my group and avoid the Deficiency Policy.

      -
    • -
    - -

    -
    - -

    -:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/89/751d124f0ebbcf8341cdd9d963dd40bd1dc8e7b93b59e7408f79736a7610f4 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/89/751d124f0ebbcf8341cdd9d963dd40bd1dc8e7b93b59e7408f79736a7610f4 new file mode 100644 index 0000000..be065e9 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/89/751d124f0ebbcf8341cdd9d963dd40bd1dc8e7b93b59e7408f79736a7610f4 @@ -0,0 +1,148 @@ +I"#

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focus areas include

    + +
      +
    • +

Human-autonomy alignment: We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

Contact-rich manipulation: We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental theories/algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." +

    +

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

+ Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total number of human corrections needed to successfully learn the safety constraints, or it declares the hypothesis space misspecified, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
+ Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/8e/cbc5bfb6e5f95f5747035b735eb0aca6cce3b2e455037d70e4d988ec25065b b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/8e/cbc5bfb6e5f95f5747035b735eb0aca6cce3b2e455037d70e4d988ec25065b new file mode 100644 index 0000000..c35cfd9 --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/8e/cbc5bfb6e5f95f5747035b735eb0aca6cce3b2e455037d70e4d988ec25065b @@ -0,0 +1,402 @@ +I"P + +

    + +

The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each research area. +Please visit the Publications page for a full list of publications.

    + +


    +
    +
    Human-autonomy alignment
    +
+ +We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions. + +
    +
    +
      +
    • Robot learning from general human interactions
    • +
    • Planning and control for human-robot systems
    • +
    +
    + + + +
    + +
    +
    Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
    +
    Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
    +
    arXiv preprint, 2024
    + +
    +
    +
    + + + +
    + +
    +
    Safe MPC Alignment with Human Directional Feedback
    +
Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George J. Pappas, and Wanxin Jin
    +
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    + +
    +
    +
    + + + +
    + +
    +
    Learning from Human Directional Corrections
    +
    Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
    +
    IEEE Transactions on Robotics (T-RO), 2023
    + +
    +
    +
    + + +
    + +
    +
    Learning from Sparse Demonstrations
    +
    Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
    +
    IEEE Transactions on Robotics (T-RO), 2023
    + +
    +
    +
    + + + + +
    + +
    +
    Inverse Optimal Control from Incomplete Trajectory Observations
    +
    Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
    +
    International Journal of Robotics Research (IJRR), 40:848–865, 2021
    + +
    +
    +
    + + + + +
    + +
    +
Inverse Optimal Control for Multiphase Cost Functions
    +
    Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
    +
    IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
    + +
    +
    + + + +
    +
    + +


    + +
    +
    Contact-rich manipulation
    +
+ + +We aim to leverage physical principles to develop efficient representations or models for robots' physical interaction with environments. We also focus on developing algorithms that enable robots to efficiently and robustly manipulate their surroundings and objects through contact. + +

    + +
      +
    • Learning, planning, and control for contact-rich manipulation
    • +
    • Computer vision and learnable geometry for dexterous manipulation
    • +
    +
    + + + +
    + +
    +
    ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
    +
    Wen Yang and Wanxin Jin
    +
    Submitted to IEEE Robotics and Automation Letters (RA-L), 2024
    + +
    +
    +
    + + +
    + +
    +
    Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
    +
    Wanxin Jin
    +
    arXiv preprint, 2024
    + +
    +
    +
    + + + +
    + +
    +
    Task-Driven Hybrid Model Reduction for Dexterous Manipulation
    +
    Wanxin Jin and Michael Posa
    +
    IEEE Transactions on Robotics (T-RO), 2024
    + +
    +
    +
    + + + + +
    + +
    +
    Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
    +
    Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
    +
    IEEE International Conference on Robotics and Automation (ICRA), 2024
    + +
    +
    +
    + + + +
    + +
    +
    Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
    +
    Shenao Zhang, Wanxin Jin, Zhaoran Wang
    +
    International Conference on Machine Learning (ICML), 2023
    + +
    +
    +
    + + + +
    + +
    +
    Learning Linear Complementarity Systems
    +
    Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
    +
    Learning for Dynamics and Control (L4DC), 2022
    + +
    +
    + + + +
    +
    + +


    + +
    +
    Fundamental methods in robotics
    +
    + +We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches. + +

    + +
      +
• Optimal control, motion planning, reinforcement learning
    • +
    • Differentiable optimization, inverse optimization
    • +
    • Hybrid system learning and control
    • +
    +
    + + + +
    + +
    +
    Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
    +
    Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
    +
    Advances in Neural Information Processing Systems (NeurIPS), 2020
    + +
    +
    +
    + + +
    + +
    +
    Safe Pontryagin Differentiable Programming
    +
    Wanxin Jin, Shaoshuai Mou, and George J. Pappas
    +
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    + +
    +
    +
    + + +
    + +
    +
    Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
    +
    Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
    +
    IEEE Robotics and Automation Letters (RA-L), 2023
    + +
    +
    +
    + + + + +
    + +
    +
    Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
    +
    Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
    +
    International Conference on Machine Learning (ICML), 2023
    + +
    +
    +
    + + +
    + +
    +
    A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
    +
    Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
    +
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    + +
    +
    + + +
    +
    + + + + + + +

    + + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ce/236edb8bfa494aa84140ec02bb3ffe7631a2b6bfcedd06c0dab9e87a821e9d b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/94/50f97824074972e40f9eea53a626f44c4b7e4b01d18aabddb75d3127113a58 similarity index 99% rename from .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ce/236edb8bfa494aa84140ec02bb3ffe7631a2b6bfcedd06c0dab9e87a821e9d rename to .jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/94/50f97824074972e40f9eea53a626f44c4b7e4b01d18aabddb75d3127113a58 index c5aa52c..da22919 100644 --- a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ce/236edb8bfa494aa84140ec02bb3ffe7631a2b6bfcedd06c0dab9e87a821e9d +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/94/50f97824074972e40f9eea53a626f44c4b7e4b01d18aabddb75d3127113a58 @@ -1,4 +1,4 @@ -I"k +I"k - - - - - - - -
    -All -Human-robot alignment -Contact-rich manipulation -Fundamental methods -
    - - - -

    - - - -
    -

    2024

    -
    -
    - -
    - -
    -
    Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
    -
    Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
    -
    arXiv preprint arXiv:2410.09286, 2024
    - -
    -
    -
    - - -
    - -
    -
    ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
    -
    Wen Yang and Wanxin Jin
    -
    Submitted to IEEE Robotics and Automation Letters (RA-L), 2024
    - -
    -
    -
    - - -
    - -
    -
    Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
    -
    Wanxin Jin
    -
    arXiv preprint, 2024
    - -
    -
    -
    - - -
    - -
    -
    Safe MPC Alignment with Human Directional Feedback
    -
    Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George. J. Pappas and Wanxin Jin
    -
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    - -
    -
    -
    - - -
    - -
    -
    A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
    -
    Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
    -
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    - -
    -
    -
    - - -
    - -
    -
    D3G: Learning Multi-robot Coordination from Demonstrations
    -
    Yizhi Zhou, Wanxin Jin, Xuan Wang
    -
    IEEE/RSJ International Conference on Intelligent Robots and Systems, 2024
    - -
    -
    -
    - - -
    - -
    -
    TacTID: High-performance Visuo-Tactile Sensor-based Terrain Identification for Legged Robots
    -
    Ziwu Song, Chenchang Li, Zhentan Quan, Shilong Mu, Xiaosa Li, Ziyi Zhao, Wanxin Jin, Chenye Wu, Wenbo Ding, Xiao-Ping Zhang
    -
    IEEE Sensors Journal, 2024
    -
    - Paper - -
    -
    -
    -
    - - -
    - -
    -
    How Can LLM Guide RL? A Value-Based Approach
    -
    Shenao Zhang, Sirui Zheng, Shuqi Ke, Zhihan Liu, Wanxin Jin, Jianbo Yuan, Yingxiang Yang, Hongxia Yang, Zhaoran Wang
    -
    arXiv preprint, 2024
    -
    - Paper - Code - -
    -
    -
    -
    - - -
    - -
    -
    Task-Driven Hybrid Model Reduction for Dexterous Manipulation
    -
    Wanxin Jin and Michael Posa
    -
    IEEE Transactions on Robotics (T-RO), 2024
    - -
    -
    -
    - - - -
    - -
    -
    Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
    -
    Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
    -
    IEEE International Conference on Robotics and Automation (ICRA), 2024
    - -
    -
    -
    - - - - - -

    - - - -
    -

    2023

    -
    -
    - - -
    - -
    -
    Guaranteed Stabilization and Safety of Nonlinear Systems via Sliding Mode Control
    -
    Fan Ding, Jin Ke, Wanxin Jin, Jianping He, and Xiaoming Duan
    -
    IEEE Control Systems Letters, 2023
    - -
    -
    -
    - - - - - - - - -
    - -
    -
    Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
    -
    Shenao Zhang, Wanxin Jin, Zhaoran Wang
    -
    International Conference on Machine Learning (ICML), 2023
    - -
    -
    -
    - - - - -
    - -
    -
    Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
    -
    Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
    -
    International Conference on Machine Learning (ICML), 2023
    - -
    -
    -
    - - - - -
    - -
    -
    Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
    -
    Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
    -
    IEEE Robotics and Automation Letters (RA-L), 2023
    - -
    -
    -
    - - - -
    - -
    -
    D3G: Learning Multi-robot Coordination from Demonstrations
    -
    Xuan Wang, YiZhi Zhou, and Wanxin Jin
    -
    IEEE International Conference on Intelligent Robots and Systems (IROS), 2023.
    - -
    -
    -
    - - - - - -
    - -
    -
    Identifying Reaction-Aware Driving Styles of Stochastic Model Predictive Controlled Vehicles by Inverse Reinforcement Learning
    -
    Ni Dang, Tao Shi, Zengjie Zhang, Wanxin Jin, Marion Leibold, and Martin Buss
    -
    International Conference on Intelligent Transportation Systems (ITSC), 2023.
    - -
    -
    -
    - - - - - - - -

    - - -
    -

    2022

    -
    -
    - - -
    - -
    -
    Learning from Human Directional Corrections
    -
    Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
    -
    IEEE Transactions on Robotics (T-RO), 2023
    - -
    -
    -
    - - - - - -
    - -
    -
    Learning from Sparse Demonstrations
    -
    Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
    -
    IEEE Transactions on Robotics (T-RO), 2023
    - -
    -
    -
    - - - - - - -
    - -
    -
    Learning Linear Complementarity Systems
    -
    Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
    -
    Learning for Dynamics and Control (L4DC), 2022
    - -
    -
    -
    - - - - - - - -
    - -
    -
    Cooperative Tuning of Multi-Agent Optimal Control Systems
    -
    Zehui Lu, Wanxin Jin, Shaoshuai Mou, Brian D. O. Anderson
    -
    IEEE Conference on Decision and Control (CDC), 2022
    - -
    -
    -
    - - - - - -

    - - -
    -

    2021

    -
    -
    - - -
    - -
    -
    Inverse Optimal Control from Incomplete Trajectory Observations
    -
    Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
    -
    International Journal of Robotics Research (IJRR), 40:848–865, 2021
    - -
    -
    -
    - - - -
    - -
    -
    Safe Pontryagin Differentiable Programming
    -
    Wanxin Jin, Shaoshuai Mou, and George J. Pappas
    -
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    - -
    -
    -
    - - - - - - -
    - -
    -
    Distributed Inverse Optimal Control
    -
    Wanxin Jin and Shaoshuai Mou
    -
    Automatica, Volume 129, 2021
    - -
    -
    -
    - - - - - - - -
    - -
    -
    Human-Automation Interaction for Assisting Novices to Emulate Experts by Inferring Task Objective Functions
    -
    Sooyung Byeon, Wanxin Jin, Dawei Sun, and Inseok Hwang
    -
    AIAA/IEEE 40th Digital Avionics Systems Conference (DASC) , 2021. Best Student Paper Finalist
    - -
    -
    -
    - - - - - - - - - -

    - - -
    -

    2020

    -
    -
    - - - -
    - -
    -
    Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
    -
    Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
    -
    Advances in Neural Information Processing Systems (NeurIPS), 2020
    - -
    -
    -
    - - - - - - -

    - - -
    -

    2019

    -
    -
    - - - - -
    - -
    -
    Inverse Optimal Control for Multiphase cost functions
    -
    Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
    -
    IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
    - -
    -
    -
    - - - - - - - - - - - - - - - - - -:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/b3/4a2a3a4942cf4ba492fe8194c01b492ddee8c4b171760b769fe803f89ab964 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/b3/4a2a3a4942cf4ba492fe8194c01b492ddee8c4b171760b769fe803f89ab964 new file mode 100644 index 0000000..0a0c72c --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/b3/4a2a3a4942cf4ba492fe8194c01b492ddee8c4b171760b769fe803f89ab964 @@ -0,0 +1,148 @@ +I"A$

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focus areas include

    + +
      +
    • +

Human-autonomy alignment: We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

Contact-rich dexterous manipulation: We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental computational methods: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

+ Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share 🔥🔥 "Complementarity-Free Multi-Contact Modeling and Optimization", our latest method that sets new benchmarks across various challenging dexterous manipulation tasks. +

    +

    "Complementarity-Free Multi-Contact Modeling and Optimization," consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

+ Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total number of human corrections needed to successfully learn the safety constraints, or it declares the hypothesis space misspecified, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
+ Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/c6/94b886cb8c22dd8ce899f5be56a356c2822762d91adeb71970e768398b7d14 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/c6/94b886cb8c22dd8ce899f5be56a356c2822762d91adeb71970e768398b7d14 new file mode 100644 index 0000000..5ee88dc --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/c6/94b886cb8c22dd8ce899f5be56a356c2822762d91adeb71970e768398b7d14 @@ -0,0 +1,400 @@ +I"sP + +

    + +

The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each research area. +Please visit the Publications page for a full list of publications.

    + +


    +
    +
    Human-autonomy alignment
    +
    + +We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions. + +
    +
    +
      +
    • Robot learning from general human interactions
    • +
    • Planning and control for human-robot systems
    • +
    +
    + + + +
    + +
    +
    Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
    +
    Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, Wanxin Jin
    +
    arXiv preprint, 2024
    + +
    +
    +
    + + + +
    + +
    +
    Safe MPC Alignment with Human Directional Feedback
    +
Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George J. Pappas, and Wanxin Jin
    +
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    + +
    +
    +
    + + + +
    + +
    +
    Learning from Human Directional Corrections
    +
    Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
    +
    IEEE Transactions on Robotics (T-RO), 2023
    + +
    +
    +
    + + +
    + +
    +
    Learning from Sparse Demonstrations
    +
    Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
    +
    IEEE Transactions on Robotics (T-RO), 2023
    + +
    +
    +
    + + + + +
    + +
    +
    Inverse Optimal Control from Incomplete Trajectory Observations
    +
    Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
    +
    International Journal of Robotics Research (IJRR), 40:848–865, 2021
    + +
    +
    +
    + + + + +
    + +
    +
Inverse Optimal Control for Multiphase Cost Functions
    +
    Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
    +
    IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019
    + +
    +
    + + + +
    +
    + +


    + +
    +
    Contact-rich manipulation
    +
+ +We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects. +

    + +
      +
    • Learning, planning, and control for contact-rich manipulation
    • +
    • Computer vision and learnable geometry for dexterous manipulation
    • +
    +
    + + + +
    + +
    +
    ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
    +
    Wen Yang and Wanxin Jin
    +
    Submitted to IEEE Robotics and Automation Letters (RA-L), 2024
    + +
    +
    +
    + + +
    + +
    +
    Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
    +
    Wanxin Jin
    +
    arXiv preprint, 2024
    + +
    +
    +
    + + + +
    + +
    +
    Task-Driven Hybrid Model Reduction for Dexterous Manipulation
    +
    Wanxin Jin and Michael Posa
    +
    IEEE Transactions on Robotics (T-RO), 2024
    + +
    +
    +
    + + + + +
    + +
    +
    Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
    +
    Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, Michael Posa
    +
    IEEE International Conference on Robotics and Automation (ICRA), 2024
    + +
    +
    +
    + + + +
    + +
    +
    Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
    +
    Shenao Zhang, Wanxin Jin, Zhaoran Wang
    +
    International Conference on Machine Learning (ICML), 2023
    + +
    +
    +
    + + + +
    + +
    +
    Learning Linear Complementarity Systems
    +
    Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
    +
    Learning for Dynamics and Control (L4DC), 2022
    + +
    +
    + + + +
    +
    + +


    + +
    +
    Fundamental methods in robotics
    +
    + +We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches. + +

    + +
      +
• Optimal control, motion planning, reinforcement learning
    • +
    • Differentiable optimization, inverse optimization
    • +
    • Hybrid system learning and control
    • +
    +
    + + + +
    + +
    +
    Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
    +
    Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
    +
    Advances in Neural Information Processing Systems (NeurIPS), 2020
    + +
    +
    +
    + + +
    + +
    +
    Safe Pontryagin Differentiable Programming
    +
    Wanxin Jin, Shaoshuai Mou, and George J. Pappas
    +
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    + +
    +
    +
    + + +
    + +
    +
    Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
    +
    Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
    +
    IEEE Robotics and Automation Letters (RA-L), 2023
    + +
    +
    +
    + + + + +
    + +
    +
    Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
    +
    Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
    +
    International Conference on Machine Learning (ICML), 2023
    + +
    +
    +
    + + +
    + +
    +
    A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
    +
    Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
    +
    Submitted to IEEE Transactions on Robotics (T-RO), 2024
    + +
    +
    + + +
    +
    + + + + + + +

    + + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/cb/d2f5f2df8897588c5e634ac175a56a07afca01c371824b2a44470221128d16 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/cb/d2f5f2df8897588c5e634ac175a56a07afca01c371824b2a44470221128d16 new file mode 100644 index 0000000..01c407e --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/cb/d2f5f2df8897588c5e634ac175a56a07afca01c371824b2a44470221128d16 @@ -0,0 +1,148 @@ +I"&$

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focus areas include

    + +
      +
    • +

Human-autonomy alignment: We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

Contact-rich dexterous manipulation: We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

+ Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share 🔥 Complementarity-Free Multi-Contact Modeling and Optimization, our latest method that sets new benchmarks across various challenging dexterous manipulation tasks. +

    +

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

+ Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total number of human corrections needed to successfully learn the safety constraints, or it declares the hypothesis space misspecified, i.e., the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
+ Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/da/b08acf0b9431df4e01006e16fc3945525e792826c2d78a0f9469f78fef5b85 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/da/b08acf0b9431df4e01006e16fc3945525e792826c2d78a0f9469f78fef5b85 new file mode 100644 index 0000000..4f1495b --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/da/b08acf0b9431df4e01006e16fc3945525e792826c2d78a0f9469f78fef5b85 @@ -0,0 +1,148 @@ +I"$

This is the Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focus areas include

    + +
      +
    • +

      Human-autonomy alignment: We develop innovative methods that enable robots to seamlessly understand and communicate with humans through various physical interactions. Our work includes developing adaptive learning algorithms and intuitive control interfaces to enhance representation alignment between humans and robots.

      +
    • +
    • +

      Contact-rich manipulation: We develop advanced physics-based representations and frameworks that enable robots to interact with and manipulate physical objects efficiently and precisely. Our goal is to enhance robots’ capabilities in performing complex tasks, such as assembly and sorting, in unstructured environments.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental theories/algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." +

    +

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +
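    To make the 50-100 Hz figure above concrete, here is a bare-bones sketch of the model-predictive-control loop it refers to; everything in it is an illustrative assumption, with a stand-in smooth dynamics model and a crude sampling optimizer in place of the paper's complementarity-free contact model and solver.

```python
# Minimal MPC skeleton: roll a model forward over a short horizon, pick the best
# control sequence, apply only its first action, then re-plan. Illustrative only.
import numpy as np

def dynamics(x, u, dt=0.01):
    # Stand-in smooth dynamics; the announced contribution is precisely a
    # multi-contact model that remains optimization-friendly in this role.
    return x + dt * u

def rollout_cost(x0, u_seq, x_goal):
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        x = dynamics(x, u)
        cost += np.sum((x - x_goal) ** 2) + 1e-3 * np.sum(u ** 2)
    return cost

def mpc_step(x0, x_goal, horizon=10, samples=256, u_dim=3):
    # Crude random-shooting optimizer standing in for gradient-based trajectory optimization.
    best_cost, best_seq = np.inf, None
    for _ in range(samples):
        u_seq = np.random.uniform(-1.0, 1.0, size=(horizon, u_dim))
        c = rollout_cost(x0, u_seq, x_goal)
        if c < best_cost:
            best_cost, best_seq = c, u_seq
    return best_seq[0]          # receding horizon: execute only the first action

x, x_goal = np.zeros(3), np.array([0.2, -0.1, 0.3])
for _ in range(5):              # a real controller would run this loop at 50-100 Hz
    x = dynamics(x, mpc_step(x, x_goal))
print("state after 5 MPC steps:", x)
```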


    +
    +
    July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

    + Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed when the safety constraints are successfully learned, or declares misspecification of the hypothesis space, i.e., that the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
    + Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
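    For intuition only, here is a toy sketch of how a constraint might be recovered from corrections when it is linear in known features (the feature map, parameters, and the sampling-based "center" below are assumptions for illustration, not the Safe MPC Alignment algorithm): each correction says the robot's plan violated the unknown constraint while the human's correction satisfied it, and each such pair cuts down the set of parameters still consistent with the feedback, which is the geometric flavor behind a finite bound on the number of corrections.

```python
# Toy sketch: learning a linear-in-features safety constraint  theta . phi(x) <= 0
# from human corrections via cutting planes. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Hypothetical constraint features of a state (or trajectory summary).
    return np.array([x[0], x[1], x[0] * x[1], 1.0])

theta_true = np.array([1.0, 2.0, 0.5, -1.0])   # "true" parameters, used only to simulate a human

cuts = []  # each cut (w, b) encodes the requirement  w . theta <= b

def add_correction_cut(x_robot, x_human):
    # Robot's plan violated the constraint; the human's correction satisfied it:
    #   theta . phi(x_robot) >= 0   and   theta . phi(x_human) <= 0.
    cuts.append((-phi(x_robot), 0.0))
    cuts.append((phi(x_human), 0.0))

def estimate_theta(n_samples=20000):
    # Crude center of the remaining consistent set: average of random unit vectors
    # satisfying all cuts (a real method would use an analytic or MVE center).
    consistent = []
    for _ in range(n_samples):
        t = rng.normal(size=4)
        t /= np.linalg.norm(t)
        if all(w @ t <= b + 1e-9 for w, b in cuts):
            consistent.append(t)
    return np.mean(consistent, axis=0) if consistent else None

# Simulate a handful of corrections.
for _ in range(8):
    x_r, x_h = rng.uniform(-1, 1, size=2), rng.uniform(-1, 1, size=2)
    if theta_true @ phi(x_r) > 0 and theta_true @ phi(x_h) <= 0:
        add_correction_cut(x_r, x_h)

print("estimated constraint parameters:", estimate_theta())
```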
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e5/95a1007c75016db20e2f7d492967f23a4815a1e907ba34d8675f729a31edbb b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e5/95a1007c75016db20e2f7d492967f23a4815a1e907ba34d8675f729a31edbb new file mode 100644 index 0000000..8cfae5e --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e5/95a1007c75016db20e2f7d492967f23a4815a1e907ba34d8675f729a31edbb @@ -0,0 +1,148 @@ +I"R$

    This is Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focuses include

    + +
      +
    • +

      Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through natural interactions.

      +
    • +
    • +

      Contact-rich manipulation: We develop advanced physics-based representations and frameworks that enable robots to interact with and manipulate physical objects efficiently and precisely. Our goal is to enhance robots’ capabilities in performing complex tasks, such as assembly and sorting, in unstructured environments.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental theories/algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." +

    +

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
    July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

    + Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed when the safety constraints are successfully learned, or declares misspecification of the hypothesis space, i.e., that the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
    + Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e7/0c957fedd863e5d4c80c7929509e968e7dbc43cdb0de759703e2d38cdba969 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e7/0c957fedd863e5d4c80c7929509e968e7dbc43cdb0de759703e2d38cdba969 new file mode 100644 index 0000000..797dc6d --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/e7/0c957fedd863e5d4c80c7929509e968e7dbc43cdb0de759703e2d38cdba969 @@ -0,0 +1,148 @@ +I"%$

    This is Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focuses include

    + +
      +
    • +

      Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

      Contact-rich dexterous manipulation: we develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share: 🔥Complementarity-Free Multi-Contact Modeling and Optimization, our latest method that shatters benchmarks in various challenging dexterous manipulation tasks. +

    +

    🔥 Thrilled to share our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
    July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

    + Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed when the safety constraints are successfully learned, or declares misspecification of the hypothesis space, i.e., that the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
    + Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ef/ca00114274981aa68bfa620fdc5c90b7e7e927b5b84c1a94016c715caeb571 b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ef/ca00114274981aa68bfa620fdc5c90b7e7e927b5b84c1a94016c715caeb571 new file mode 100644 index 0000000..ed5afdf --- /dev/null +++ b/.jekyll-cache/Jekyll/Cache/Jekyll--Converters--Markdown/ef/ca00114274981aa68bfa620fdc5c90b7e7e927b5b84c1a94016c715caeb571 @@ -0,0 +1,148 @@ +I"#

    This is Intelligent Robotics and Interactive Systems (IRIS) Lab! Our research focuses include

    + +
      +
    • +

      Human-autonomy alignment: we develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      +
    • +
    • +

      Contact-rich dexterous manipulation: we develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

      +
    • +
    • +

      Fundamental methods for robot autonomy: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches.

      +
    • +
    + +

    + +
    + +     + +     + +     + +     + +     + +
    + +


    + +

    Recent Updates

    + +

    + +
    + + +
    +
    Oct 15, 2024
    +
    +

    + 🔥🔥 “Skills from YouTube, No Prep!” 🔥🔥 + Can robots learn skills from YouTube without complex video processing? + Our "Language-Model-Driven Bi-level Method” makes it possible! By chaining VLM & LLM in a bi-level framework, we use the “chain rule” to guide reward learning directly from video demos. 🚀Check out our RL agents mastering skills from their biological counterparts!🚀 +

    +
    + +
    +
    + Check out the preprint. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + + + + +
    +
    Aug 24, 2024
    +
    +

    + 🚀 Can a robotic hand master dexterous manipulation in just 2 minutes? YES! 🎉 Excited to share our recent work “ContactSDF”, a physics-inspired representation using signed distance functions (SDFs) for contact-rich manipulation, from geometry to MPC. 🔥 Watch a full, uncut video of Allegro hand learning from scratch below! We are pushing the boundaries of “fast” learning and planning in dexterous manipulation. +

    +
    + +
    +
    + Check out the webpage, preprint, and code. Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    + +
    +
    Aug 19, 2024
    +
    +

    + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." +

    +

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    + Our method sets a new benchmark in dexterous manipulation: +
      +
    • 🎯 A 96.5% success rate across all tasks
    • +
    • ⚙️ High manipulation accuracy: 11° reorientation error & 7.8 mm position error
    • +
    • 🚀 Model predictive control running at 50-100 Hz for all tasks
    • +
    +
    + +
    +
    + Check out our preprint, and try out our code (fun guaranteed). Here is a long demo: +

    +
    + +
    +
    +
    + + + +


    +
    +
    July 9, 2024
    +
    +

    + 🤖 Robots may be good at inferring a task reward from human feedback, but how about inferring safety boundaries from human feedback? In many cases such as robot feeding and liquid pouring, specifying user-comfortable safety constraints is more challenging than rewards. Our recent work, led by my PhD student Zhixian Xie, shows that this is possible, and can actually be very human-effort efficient! Our method is called Safe MPC Alignment (submitted to T-RO), enabling a robot to learn its control safety constraints with only a small handful of human online corrections! +

    + Importantly, Safe MPC Alignment is certifiable: it either provides an upper bound on the total amount of human feedback needed when the safety constraints are successfully learned, or declares misspecification of the hypothesis space, i.e., that the true implicit safety constraint cannot be found within the specified hypothesis space. +
    + +
    +
    + Check out the project website, preprint, and a brief introduction video below. +
    +
    + +
    +
    +
    + + +
    + +:ET \ No newline at end of file diff --git a/_pages/0.about.md b/_pages/0.about.md index 4cb7050..3817638 100644 --- a/_pages/0.about.md +++ b/_pages/0.about.md @@ -20,11 +20,11 @@ profile: This is **Intelligent Robotics and Interactive Systems (IRIS)** Lab! Our research focuses include -- **Human-robot alignment:** We develop innovative methods that enable robots to seamlessly understand and communicate with humans through various physical interactions. Our work includes developing adaptive learning algorithms and intuitive control interfaces to enhance representation alignment between humans and robots. +- **Human-autonomy alignment:** We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions. -- **Contact-rich manipulation:** We develop advanced physics-based representations and frameworks that enable robots to interact with and manipulate physical objects efficiently and precisely. Our goal is to enhance robots’ capabilities in performing complex tasks, such as assembly and sorting, in unstructured environments. +- **Contact-rich dexterous manipulation:** We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects. -- **Fundamental methods for robot autonomy:** We develop fundamental theories/algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches. +- **Fundamental computational methods:** We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches. @@ -119,9 +119,9 @@ This is **Intelligent Robotics and Interactive Systems (IRIS)** Lab! Our researc
    Aug 19, 2024

    - Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share: 🔥🔥"Complementarity-Free Multi-Contact Modeling and Optimization," our latest method that shatters benchmarks in various challenging dexterous manipulation tasks.

    -

    🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

    "Complementarity-Free Multi-Contact Modeling and Optimization," consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below!

    Our method sets a new benchmark in dexterous manipulation:
      diff --git a/_pages/1.research.md b/_pages/1.research.md index ec734b8..7967cb2 100644 --- a/_pages/1.research.md +++ b/_pages/1.research.md @@ -14,16 +14,16 @@ nav: true

    -The IRIS lab focuses on three reserach directions: **(1) human-robot alignment**, **(2) contact-rich manipulation**, and **(3) fundamental methods in robotics**. Below are some recent publications in each set of research interest. +The IRIS lab focuses on three research directions: **(1) human-autonomy alignment**, **(2) contact-rich dexterous manipulation**, and **(3) fundamental methods in robotics**. Below are some recent publications in each research area. Please visit the [Publications](../publications){:target="_blank"} page for a full list of publications.
      -
      Human-robot alignment
      +
      Human-autonomy alignment
      -We develop methods to empower a robot with the ability to efficiently understand and be understood by human users through a variety of physical interactions. We explore how robots can aptly respond to and collaborate meaningfully with users. +We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      @@ -143,12 +143,10 @@ We develop methods to empower a robot with the ability to efficiently understand
      -
      Contact-rich manipulation
      +
      Contact-rich dexterous manipulation
    - -We aim to leverage physical principles to develop efficient representations or models for robot's physical interaction with environments. We also focus on developing algorithms to enable robots efficiently and robustly manipulate their surroundings/objects through contact. - +We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

        @@ -271,7 +269,7 @@ We aim to leverage physical principles to develop efficient representations or
        Fundamental methods in robotics
        -We focus on developing fundamental theories and algorithms for achieving efficient, safe, and robust robot intelligence. Our methods lie at the intersection of model-based (control and optimization) and data-driven approaches, harnessing the complementary benefits of both. +We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

        diff --git a/_pages/2.publications.md b/_pages/2.publications.md index cb33fb6..7736210 100644 --- a/_pages/2.publications.md +++ b/_pages/2.publications.md @@ -47,8 +47,8 @@ nav: true
        All -Human-robot alignment -Contact-rich manipulation +Human-autonomy alignment +Contact-rich dexterous manipulation Fundamental methods
        diff --git a/_site/CNAME b/_site/CNAME new file mode 100644 index 0000000..b7b917b --- /dev/null +++ b/_site/CNAME @@ -0,0 +1 @@ +irislab.tech \ No newline at end of file diff --git a/_site/feed.xml b/_site/feed.xml index a8a5e54..24b376a 100644 --- a/_site/feed.xml +++ b/_site/feed.xml @@ -1 +1 @@ -Jekyll2024-10-15T22:11:31-07:00http://localhost:4000/feed.xmlblank \ No newline at end of file +Jekyll2024-11-09T08:46:03-07:00http://localhost:4000/feed.xmlblank \ No newline at end of file diff --git a/_site/index.html b/_site/index.html index 717c29b..c5fb56a 100644 --- a/_site/index.html +++ b/_site/index.html @@ -226,13 +226,13 @@
        • -

          Human-robot alignment: We develop innovative methods that enable robots to seamlessly understand and communicate with humans through various physical interactions. Our work includes developing adaptive learning algorithms and intuitive control interfaces to enhance representation alignment between humans and robots.

          +

          Human-autonomy alignment: We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

        • -

          Contact-rich manipulation: We develop advanced physics-based representations and frameworks that enable robots to interact with and manipulate physical objects efficiently and precisely. Our goal is to enhance robots’ capabilities in performing complex tasks, such as assembly and sorting, in unstructured environments.

          +

          Contact-rich dexterous manipulation: We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

        • -

          Fundamental methods for robot autonomy: We develop fundamental theories/algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based (control/optimization) and data-driven (machine learning & AI) approaches.

          +

          Fundamental computational methods: We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

        @@ -318,9 +318,9 @@

        Recent Updates

        Aug 19, 2024

    - Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? YES! The key lies in our new "effective yet optimization-friendly multi-contact model." + Can model-based planning and control rival or even surpass reinforcement learning in challenging dexterous manipulation tasks? Our answer is a resounding YES! PROUD to share: 🔥🔥"Complementarity-Free Multi-Contact Modeling and Optimization," our latest method that shatters benchmarks in various challenging dexterous manipulation tasks.

        -

        🔥 Thrilled to unveil our work: "Complementarity-Free Multi-Contact Modeling and Optimization," which consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below! +

        "Complementarity-Free Multi-Contact Modeling and Optimization," consistently achieves state-of-the-art results across different challenging dexterous manipulation tasks, including fingertip 3D in-air manipulation, TriFinger in-hand manipulation, and Allegro hand on-palm manipulation, all with different objects. Check out the demo below!

        Our method sets a new benchmark in dexterous manipulation:
          @@ -385,7 +385,7 @@

          Recent Updates

          - Last updated: October 15, 2024. + Last updated: November 09, 2024.
        diff --git a/_site/joining/index.html b/_site/joining/index.html index 27cef3b..6d3ccac 100644 --- a/_site/joining/index.html +++ b/_site/joining/index.html @@ -313,7 +313,7 @@
        How to Appl - Last updated: October 15, 2024. + Last updated: November 09, 2024.
        diff --git a/_site/people/index.html b/_site/people/index.html index f62f7f8..ca39a91 100644 --- a/_site/people/index.html +++ b/_site/people/index.html @@ -423,7 +423,7 @@

        Parker Ferguson

        - Last updated: October 15, 2024. + Last updated: November 09, 2024.
      diff --git a/_site/posts/index.html b/_site/posts/index.html index e5c3abe..9dd7f47 100644 --- a/_site/posts/index.html +++ b/_site/posts/index.html @@ -516,7 +516,7 @@
      Examp - Last updated: October 15, 2024. + Last updated: November 09, 2024.
      diff --git a/_site/publications/index.html b/_site/publications/index.html index 35627f0..68a34b2 100644 --- a/_site/publications/index.html +++ b/_site/publications/index.html @@ -261,8 +261,8 @@
      All -Human-robot alignment -Contact-rich manipulation +Human-autonomy alignment +Contact-rich dexterous manipulation Fundamental methods
      @@ -851,7 +851,7 @@

      2019

      - Last updated: October 15, 2024. + Last updated: November 09, 2024.
      diff --git a/_site/research/index.html b/_site/research/index.html index c9bb0c0..8025580 100644 --- a/_site/research/index.html +++ b/_site/research/index.html @@ -229,15 +229,15 @@

      -

      The IRIS lab focuses on three reserach directions: (1) human-robot alignment, (2) contact-rich manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each set of research interest. +

    The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are some recent publications in each research area. Please visit the Publications page for a full list of publications.


      -
      Human-robot alignment
      +
      Human-autonomy alignment
      -We develop methods to empower a robot with the ability to efficiently understand and be understood by human users through a variety of physical interactions. We explore how robots can aptly respond to and collaborate meaningfully with users. +We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

      @@ -355,12 +355,10 @@


      -
      Contact-rich manipulation
      +
      Contact-rich dexterous manipulation
    - -We aim to leverage physical principles to develop efficient representations or models for robot's physical interaction with environments. We also focus on developing algorithms to enable robots efficiently and robustly manipulate their surroundings/objects through contact. - +We develop efficient physics-based representations/modeling, planning/control methods to enable robots to gain dexterity through frequently making or breaking contacts with objects.

        @@ -480,7 +478,7 @@
        Fundamental methods in robotics
        -We focus on developing fundamental theories and algorithms for achieving efficient, safe, and robust robot intelligence. Our methods lie at the intersection of model-based (control and optimization) and data-driven approaches, harnessing the complementary benefits of both. +We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

        @@ -642,7 +640,7 @@ - Last updated: October 15, 2024. + Last updated: November 09, 2024.
        diff --git a/_site/robots/index.html b/_site/robots/index.html index 2fb543b..724c3d1 100644 --- a/_site/robots/index.html +++ b/_site/robots/index.html @@ -307,7 +307,7 @@

        IRIS Lab GPU Computing (4 - Last updated: October 15, 2024. + Last updated: November 09, 2024.

      diff --git a/_site/teaching/index.html b/_site/teaching/index.html index e5f9266..a3363f7 100644 --- a/_site/teaching/index.html +++ b/_site/teaching/index.html @@ -256,7 +256,7 @@