From c5e56b185c0944a2fd6c41ed17c173ec55c4c0cc Mon Sep 17 00:00:00 2001 From: "Daniel J. McDonald" Date: Thu, 2 Nov 2023 08:07:52 -0700 Subject: [PATCH] multiplex boosting --- .../20-boosting/execute-results/html.json | 4 +- .../revealjs/plugin/multiplex/multiplex.js | 57 + .../revealjs/plugin/multiplex/plugin.yml | 8 + .../revealjs/plugin/multiplex/socket.io.js | 9 + schedule/slides/20-boosting-speaker.html | 1084 +++++++++++++++++ schedule/slides/20-boosting.qmd | 4 +- 6 files changed, 1163 insertions(+), 3 deletions(-) create mode 100644 _freeze/site_libs/revealjs/plugin/multiplex/multiplex.js create mode 100644 _freeze/site_libs/revealjs/plugin/multiplex/plugin.yml create mode 100644 _freeze/site_libs/revealjs/plugin/multiplex/socket.io.js create mode 100644 schedule/slides/20-boosting-speaker.html diff --git a/_freeze/schedule/slides/20-boosting/execute-results/html.json b/_freeze/schedule/slides/20-boosting/execute-results/html.json index f54a357..a710d7f 100644 --- a/_freeze/schedule/slides/20-boosting/execute-results/html.json +++ b/_freeze/schedule/slides/20-boosting/execute-results/html.json @@ -1,7 +1,7 @@ { - "hash": "ec86535b178e899a578c3a3c7779af0c", + "hash": "4e41ba0061438c4140349f5002e37fd6", "result": { - "markdown": "---\nlecture: \"20 Boosting\"\nformat: revealjs\nmetadata-files: \n - _metadata.yml\n---\n---\n---\n\n## {{< meta lecture >}} {.large background-image=\"gfx/smooths.svg\" background-opacity=\"0.3\"}\n\n[Stat 406]{.secondary}\n\n[{{< meta author >}}]{.secondary}\n\nLast modified -- 12 October 2023\n\n\n\n$$\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\minimize}{minimize}\n\\DeclareMathOperator*{\\maximize}{maximize}\n\\DeclareMathOperator*{\\find}{find}\n\\DeclareMathOperator{\\st}{subject\\,\\,to}\n\\newcommand{\\E}{E}\n\\newcommand{\\Expect}[1]{\\E\\left[ #1 \\right]}\n\\newcommand{\\Var}[1]{\\mathrm{Var}\\left[ #1 \\right]}\n\\newcommand{\\Cov}[2]{\\mathrm{Cov}\\left[#1,\\ #2\\right]}\n\\newcommand{\\given}{\\ \\vert\\ }\n\\newcommand{\\X}{\\mathbf{X}}\n\\newcommand{\\x}{\\mathbf{x}}\n\\newcommand{\\y}{\\mathbf{y}}\n\\newcommand{\\P}{\\mathcal{P}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\norm}[1]{\\left\\lVert #1 \\right\\rVert}\n\\newcommand{\\snorm}[1]{\\lVert #1 \\rVert}\n\\newcommand{\\tr}[1]{\\mbox{tr}(#1)}\n\\newcommand{\\brt}{\\widehat{\\beta}^R_{s}}\n\\newcommand{\\brl}{\\widehat{\\beta}^R_{\\lambda}}\n\\newcommand{\\bls}{\\widehat{\\beta}_{ols}}\n\\newcommand{\\blt}{\\widehat{\\beta}^L_{s}}\n\\newcommand{\\bll}{\\widehat{\\beta}^L_{\\lambda}}\n$$\n\n\n\n\n\n## Last time\n\n\n\nWe learned about bagging, for averaging [low-bias]{.secondary} / [high-variance]{.tertiary} estimators.\n\nToday, we examine it's opposite: Boosting.\n\nBoosting also combines estimators, but it combines [high-bias]{.secondary} / [low-variance]{.tertiary} estimators.\n\nBoosting has a number of flavours. And if you Google descriptions, most are wrong.\n\nFor a deep (and accurate) treatment, see [ESL] Chapter 10\n\n\n. . 
.\n\nWe'll discuss 2 flavours: [AdaBoost]{.secondary} and [Gradient Boosting]{.secondary}\n\nNeither requires a tree, but that's the typical usage.\n\nBoosting needs a \"weak learner\", so small trees (stumps) are natural.\n\n\n\n## AdaBoost intuition (for classification)\n\nAt each iteration, we weight the [observations]{.secondary}.\n\nObservations that are currently misclassified, get [higher]{.tertiary} weights.\n\nSo on the next iteration, we'll try harder to correctly classify our mistakes.\n\nThe number of iterations must be chosen.\n\n\n\n## AdaBoost (Freund and Schapire, generic)\n\nLet $G(x, \\theta)$ be any weak learner \n\n⛭ imagine a tree with one split: then $\\theta=$ (feature, split point)\n\n\n\nAlgorithm (AdaBoost) 🛠️\n\n* Set observation weights $w_i=1/n$.\n* Until we quit ( $m\n mutate(mobile = as.factor(Mobility > .1)) |>\n select(-ID, -Name, -Mobility, -State) |>\n drop_na()\nn <- nrow(mob)\ntrainidx <- sample.int(n, floor(n * .75))\ntestidx <- setdiff(1:n, trainidx)\ntrain <- mob[trainidx, ]\ntest <- mob[testidx, ]\nrf <- randomForest(mobile ~ ., data = train)\nbag <- randomForest(mobile ~ ., data = train, mtry = ncol(mob) - 1)\npreds <- tibble(truth = test$mobile, rf = predict(rf, test), bag = predict(bag, test))\n```\n:::\n\n::: {.cell layout-align=\"center\" output-location='column-fragment'}\n\n```{.r .cell-code code-line-numbers=\"1-6|7-12|17|\"}\nlibrary(gbm)\ntrain_boost <- train |>\n mutate(mobile = as.integer(mobile) - 1)\n# needs {0, 1} responses\ntest_boost <- test |>\n mutate(mobile = as.integer(mobile) - 1)\nadab <- gbm(\n mobile ~ .,\n data = train_boost,\n n.trees = 500,\n distribution = \"adaboost\"\n)\npreds$adab <- as.numeric(\n predict(adab, test_boost) > 0\n)\npar(mar = c(5, 11, 0, 1))\ns <- summary(adab, las = 1)\n```\n\n::: {.cell-output-display}\n![](20-boosting_files/figure-revealjs/unnamed-chunk-2-1.svg){fig-align='center'}\n:::\n:::\n\n\n\n## Forward stagewise additive modeling (FSAM, completely generic)\n\nAlgorithm 🛠️\n\n* Set initial predictor $f_0(x)=0$\n* Until we quit ( $m 0) != truth)), 2)\n )\n ) +\n annotate(\"text\",\n x = 4, y = -5, color = red,\n label = paste(\"adaboost error\\n\", round(with(boost_preds, mean((adaboost > 0) != truth)), 2))\n )\nboost_oob <- tibble(\n adaboost = adab$oobag.improve, gbm = grad_boost$oobag.improve,\n ntrees = 1:500\n)\ng2 <- boost_oob %>%\n pivot_longer(-ntrees, values_to = \"OOB_Error\") %>%\n ggplot(aes(x = ntrees, y = OOB_Error, color = name)) +\n geom_line() +\n scale_color_manual(values = c(orange, blue)) +\n theme(legend.title = element_blank())\nplot_grid(g1, g2, rel_widths = c(.4, .6))\n```\n\n::: {.cell-output-display}\n![](20-boosting_files/figure-revealjs/unnamed-chunk-3-1.svg){fig-align='center'}\n:::\n:::\n\n\n\n\n## Major takeaways\n\n* Two flavours of Boosting \n 1. AdaBoost (the original) and \n 2. 
gradient boosting (easier and more computationally friendly)\n\n* The connection is \"Forward stagewise additive modelling\" (AdaBoost is a special case)\n\n* The connection reveals that AdaBoost \"isn't robust because it uses exponential loss\" (squared error is even worse)\n\n* Gradient boosting is a computationally easier version of FSAM\n\n* All use **weak learners** (compare to Bagging)\n\n* Think about the Bias-Variance implications\n\n* You can use these for regression or classification\n\n* You can do this with other weak learners besides trees.\n\n\n\n# Next time...\n\nNeural networks and deep learning, the beginning\n", + "markdown": "---\nlecture: \"20 Boosting\"\nformat: \n revealjs:\n multiplex: true\nmetadata-files: \n - _metadata.yml\n---\n---\n---\n\n## {{< meta lecture >}} {.large background-image=\"gfx/smooths.svg\" background-opacity=\"0.3\"}\n\n[Stat 406]{.secondary}\n\n[{{< meta author >}}]{.secondary}\n\nLast modified -- 02 November 2023\n\n\n\n$$\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\minimize}{minimize}\n\\DeclareMathOperator*{\\maximize}{maximize}\n\\DeclareMathOperator*{\\find}{find}\n\\DeclareMathOperator{\\st}{subject\\,\\,to}\n\\newcommand{\\E}{E}\n\\newcommand{\\Expect}[1]{\\E\\left[ #1 \\right]}\n\\newcommand{\\Var}[1]{\\mathrm{Var}\\left[ #1 \\right]}\n\\newcommand{\\Cov}[2]{\\mathrm{Cov}\\left[#1,\\ #2\\right]}\n\\newcommand{\\given}{\\ \\vert\\ }\n\\newcommand{\\X}{\\mathbf{X}}\n\\newcommand{\\x}{\\mathbf{x}}\n\\newcommand{\\y}{\\mathbf{y}}\n\\newcommand{\\P}{\\mathcal{P}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\norm}[1]{\\left\\lVert #1 \\right\\rVert}\n\\newcommand{\\snorm}[1]{\\lVert #1 \\rVert}\n\\newcommand{\\tr}[1]{\\mbox{tr}(#1)}\n\\newcommand{\\brt}{\\widehat{\\beta}^R_{s}}\n\\newcommand{\\brl}{\\widehat{\\beta}^R_{\\lambda}}\n\\newcommand{\\bls}{\\widehat{\\beta}_{ols}}\n\\newcommand{\\blt}{\\widehat{\\beta}^L_{s}}\n\\newcommand{\\bll}{\\widehat{\\beta}^L_{\\lambda}}\n\\newcommand{\\U}{\\mathbf{U}}\n\\newcommand{\\D}{\\mathbf{D}}\n\\newcommand{\\V}{\\mathbf{V}}\n$$\n\n\n\n\n\n## Last time\n\n\n\nWe learned about bagging, for averaging [low-bias]{.secondary} / [high-variance]{.tertiary} estimators.\n\nToday, we examine it's opposite: Boosting.\n\nBoosting also combines estimators, but it combines [high-bias]{.secondary} / [low-variance]{.tertiary} estimators.\n\nBoosting has a number of flavours. And if you Google descriptions, most are wrong.\n\nFor a deep (and accurate) treatment, see [ESL] Chapter 10\n\n\n. . 
.\n\nWe'll discuss 2 flavours: [AdaBoost]{.secondary} and [Gradient Boosting]{.secondary}\n\nNeither requires a tree, but that's the typical usage.\n\nBoosting needs a \"weak learner\", so small trees (stumps) are natural.\n\n\n\n## AdaBoost intuition (for classification)\n\nAt each iteration, we weight the [observations]{.secondary}.\n\nObservations that are currently misclassified, get [higher]{.tertiary} weights.\n\nSo on the next iteration, we'll try harder to correctly classify our mistakes.\n\nThe number of iterations must be chosen.\n\n\n\n## AdaBoost (Freund and Schapire, generic)\n\nLet $G(x, \\theta)$ be any weak learner \n\n⛭ imagine a tree with one split: then $\\theta=$ (feature, split point)\n\n\n\nAlgorithm (AdaBoost) 🛠️\n\n* Set observation weights $w_i=1/n$.\n* Until we quit ( $m\n mutate(mobile = as.factor(Mobility > .1)) |>\n select(-ID, -Name, -Mobility, -State) |>\n drop_na()\nn <- nrow(mob)\ntrainidx <- sample.int(n, floor(n * .75))\ntestidx <- setdiff(1:n, trainidx)\ntrain <- mob[trainidx, ]\ntest <- mob[testidx, ]\nrf <- randomForest(mobile ~ ., data = train)\nbag <- randomForest(mobile ~ ., data = train, mtry = ncol(mob) - 1)\npreds <- tibble(truth = test$mobile, rf = predict(rf, test), bag = predict(bag, test))\n```\n:::\n\n::: {.cell layout-align=\"center\" output-location='column-fragment'}\n\n```{.r .cell-code code-line-numbers=\"1-6|7-12|17|\"}\nlibrary(gbm)\ntrain_boost <- train |>\n mutate(mobile = as.integer(mobile) - 1)\n# needs {0, 1} responses\ntest_boost <- test |>\n mutate(mobile = as.integer(mobile) - 1)\nadab <- gbm(\n mobile ~ .,\n data = train_boost,\n n.trees = 500,\n distribution = \"adaboost\"\n)\npreds$adab <- as.numeric(\n predict(adab, test_boost) > 0\n)\npar(mar = c(5, 11, 0, 1))\ns <- summary(adab, las = 1)\n```\n\n::: {.cell-output-display}\n![](20-boosting_files/figure-revealjs/unnamed-chunk-2-1.svg){fig-align='center'}\n:::\n:::\n\n\n\n## Forward stagewise additive modeling (FSAM, completely generic)\n\nAlgorithm 🛠️\n\n* Set initial predictor $f_0(x)=0$\n* Until we quit ( $m 0) != truth)), 2)\n )\n ) +\n annotate(\"text\",\n x = 4, y = -5, color = red,\n label = paste(\"adaboost error\\n\", round(with(boost_preds, mean((adaboost > 0) != truth)), 2))\n )\nboost_oob <- tibble(\n adaboost = adab$oobag.improve, gbm = grad_boost$oobag.improve,\n ntrees = 1:500\n)\ng2 <- boost_oob %>%\n pivot_longer(-ntrees, values_to = \"OOB_Error\") %>%\n ggplot(aes(x = ntrees, y = OOB_Error, color = name)) +\n geom_line() +\n scale_color_manual(values = c(orange, blue)) +\n theme(legend.title = element_blank())\nplot_grid(g1, g2, rel_widths = c(.4, .6))\n```\n\n::: {.cell-output-display}\n![](20-boosting_files/figure-revealjs/unnamed-chunk-3-1.svg){fig-align='center'}\n:::\n:::\n\n\n\n\n## Major takeaways\n\n* Two flavours of Boosting \n 1. AdaBoost (the original) and \n 2. 
gradient boosting (easier and more computationally friendly)\n\n* The connection is \"Forward stagewise additive modelling\" (AdaBoost is a special case)\n\n* The connection reveals that AdaBoost \"isn't robust because it uses exponential loss\" (squared error is even worse)\n\n* Gradient boosting is a computationally easier version of FSAM\n\n* All use **weak learners** (compare to Bagging)\n\n* Think about the Bias-Variance implications\n\n* You can use these for regression or classification\n\n* You can do this with other weak learners besides trees.\n\n\n\n# Next time...\n\nNeural networks and deep learning, the beginning\n", "supporting": [ "20-boosting_files" ], diff --git a/_freeze/site_libs/revealjs/plugin/multiplex/multiplex.js b/_freeze/site_libs/revealjs/plugin/multiplex/multiplex.js new file mode 100644 index 0000000..c15414e --- /dev/null +++ b/_freeze/site_libs/revealjs/plugin/multiplex/multiplex.js @@ -0,0 +1,57 @@ +(function() { + + // emulate async script load + window.addEventListener( 'load', function() { + var multiplex = Reveal.getConfig().multiplex; + var socketId = multiplex.id; + var socket = io.connect(multiplex.url); + + function post( evt ) { + var messageData = { + state: Reveal.getState(), + secret: multiplex.secret, + socketId: multiplex.id, + content: (evt || {}).content + }; + socket.emit( 'multiplex-statechanged', messageData ); + }; + + // master + if (multiplex.secret !== null) { + + // Don't emit events from inside of notes windows + if ( window.location.search.match( /receiver/gi ) ) { return; } + + // post once the page is loaded, so the client follows also on "open URL". + post(); + + // Monitor events that trigger a change in state + Reveal.on( 'slidechanged', post ); + Reveal.on( 'fragmentshown', post ); + Reveal.on( 'fragmenthidden', post ); + Reveal.on( 'overviewhidden', post ); + Reveal.on( 'overviewshown', post ); + Reveal.on( 'paused', post ); + Reveal.on( 'resumed', post ); + document.addEventListener( 'send', post ); // broadcast custom events sent by other plugins + + // client + } else { + socket.on(multiplex.id, function(message) { + // ignore data from sockets that aren't ours + if (message.socketId !== socketId) { return; } + if( window.location.host === 'localhost:1947' ) return; + + if ( message.state ) { + Reveal.setState(message.state); + } + if ( message.content ) { + // forward custom events to other plugins + var event = new CustomEvent('received'); + event.content = message.content; + document.dispatchEvent( event ); + } + }); + } + }); +}()); \ No newline at end of file diff --git a/_freeze/site_libs/revealjs/plugin/multiplex/plugin.yml b/_freeze/site_libs/revealjs/plugin/multiplex/plugin.yml new file mode 100644 index 0000000..9ccda63 --- /dev/null +++ b/_freeze/site_libs/revealjs/plugin/multiplex/plugin.yml @@ -0,0 +1,8 @@ +name: multiplex +script: [socket.io.js, multiplex.js] +register: false +config: + multiplex: + secret: null + id: null + url: "https://reveal-multiplex.glitch.me/" diff --git a/_freeze/site_libs/revealjs/plugin/multiplex/socket.io.js b/_freeze/site_libs/revealjs/plugin/multiplex/socket.io.js new file mode 100644 index 0000000..270777b --- /dev/null +++ b/_freeze/site_libs/revealjs/plugin/multiplex/socket.io.js @@ -0,0 +1,9 @@ +/*! + * Socket.IO v2.3.0 + * (c) 2014-2019 Guillermo Rauch + * Released under the MIT License. 
+ */ +!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.io=e():t.io=e()}(this,function(){return function(t){function e(r){if(n[r])return n[r].exports;var o=n[r]={exports:{},id:r,loaded:!1};return t[r].call(o.exports,o,o.exports,e),o.loaded=!0,o.exports}var n={};return e.m=t,e.c=n,e.p="",e(0)}([function(t,e,n){function r(t,e){"object"==typeof t&&(e=t,t=void 0),e=e||{};var n,r=o(t),i=r.source,u=r.id,p=r.path,h=c[u]&&p in c[u].nsps,f=e.forceNew||e["force new connection"]||!1===e.multiplex||h;return f?(a("ignoring socket cache for %s",i),n=s(i,e)):(c[u]||(a("new io instance for %s",i),c[u]=s(i,e)),n=c[u]),r.query&&!e.query&&(e.query=r.query),n.socket(r.path,e)}var o=n(1),i=n(7),s=n(15),a=n(3)("socket.io-client");t.exports=e=r;var c=e.managers={};e.protocol=i.protocol,e.connect=r,e.Manager=n(15),e.Socket=n(39)},function(t,e,n){function r(t,e){var n=t;e=e||"undefined"!=typeof location&&location,null==t&&(t=e.protocol+"//"+e.host),"string"==typeof t&&("/"===t.charAt(0)&&(t="/"===t.charAt(1)?e.protocol+t:e.host+t),/^(https?|wss?):\/\//.test(t)||(i("protocol-less url %s",t),t="undefined"!=typeof e?e.protocol+"//"+t:"https://"+t),i("parse %s",t),n=o(t)),n.port||(/^(http|ws)$/.test(n.protocol)?n.port="80":/^(http|ws)s$/.test(n.protocol)&&(n.port="443")),n.path=n.path||"/";var r=n.host.indexOf(":")!==-1,s=r?"["+n.host+"]":n.host;return n.id=n.protocol+"://"+s+":"+n.port,n.href=n.protocol+"://"+s+(e&&e.port===n.port?"":":"+n.port),n}var o=n(2),i=n(3)("socket.io-client:url");t.exports=r},function(t,e){var n=/^(?:(?![^:@]+:[^:@\/]*@)(http|https|ws|wss):\/\/)?((?:(([^:@]*)(?::([^:@]*))?)?@)?((?:[a-f0-9]{0,4}:){2,7}[a-f0-9]{0,4}|[^:\/?#]*)(?::(\d*))?)(((\/(?:[^?#](?![^?#\/]*\.[^?#\/.]+(?:[?#]|$)))*\/?)?([^?#\/]*))(?:\?([^#]*))?(?:#(.*))?)/,r=["source","protocol","authority","userInfo","user","password","host","port","relative","path","directory","file","query","anchor"];t.exports=function(t){var e=t,o=t.indexOf("["),i=t.indexOf("]");o!=-1&&i!=-1&&(t=t.substring(0,o)+t.substring(o,i).replace(/:/g,";")+t.substring(i,t.length));for(var s=n.exec(t||""),a={},c=14;c--;)a[r[c]]=s[c]||"";return o!=-1&&i!=-1&&(a.source=e,a.host=a.host.substring(1,a.host.length-1).replace(/;/g,":"),a.authority=a.authority.replace("[","").replace("]","").replace(/;/g,":"),a.ipv6uri=!0),a}},function(t,e,n){(function(r){"use strict";function o(){return!("undefined"==typeof window||!window.process||"renderer"!==window.process.type&&!window.process.__nwjs)||("undefined"==typeof navigator||!navigator.userAgent||!navigator.userAgent.toLowerCase().match(/(edge|trident)\/(\d+)/))&&("undefined"!=typeof document&&document.documentElement&&document.documentElement.style&&document.documentElement.style.WebkitAppearance||"undefined"!=typeof window&&window.console&&(window.console.firebug||window.console.exception&&window.console.table)||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/firefox\/(\d+)/)&&parseInt(RegExp.$1,10)>=31||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/applewebkit\/(\d+)/))}function i(e){if(e[0]=(this.useColors?"%c":"")+this.namespace+(this.useColors?" 
%c":" ")+e[0]+(this.useColors?"%c ":" ")+"+"+t.exports.humanize(this.diff),this.useColors){var n="color: "+this.color;e.splice(1,0,n,"color: inherit");var r=0,o=0;e[0].replace(/%[a-zA-Z%]/g,function(t){"%%"!==t&&(r++,"%c"===t&&(o=r))}),e.splice(o,0,n)}}function s(){var t;return"object"===("undefined"==typeof console?"undefined":p(console))&&console.log&&(t=console).log.apply(t,arguments)}function a(t){try{t?e.storage.setItem("debug",t):e.storage.removeItem("debug")}catch(n){}}function c(){var t=void 0;try{t=e.storage.getItem("debug")}catch(n){}return!t&&"undefined"!=typeof r&&"env"in r&&(t=r.env.DEBUG),t}function u(){try{return localStorage}catch(t){}}var p="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t};e.log=s,e.formatArgs=i,e.save=a,e.load=c,e.useColors=o,e.storage=u(),e.colors=["#0000CC","#0000FF","#0033CC","#0033FF","#0066CC","#0066FF","#0099CC","#0099FF","#00CC00","#00CC33","#00CC66","#00CC99","#00CCCC","#00CCFF","#3300CC","#3300FF","#3333CC","#3333FF","#3366CC","#3366FF","#3399CC","#3399FF","#33CC00","#33CC33","#33CC66","#33CC99","#33CCCC","#33CCFF","#6600CC","#6600FF","#6633CC","#6633FF","#66CC00","#66CC33","#9900CC","#9900FF","#9933CC","#9933FF","#99CC00","#99CC33","#CC0000","#CC0033","#CC0066","#CC0099","#CC00CC","#CC00FF","#CC3300","#CC3333","#CC3366","#CC3399","#CC33CC","#CC33FF","#CC6600","#CC6633","#CC9900","#CC9933","#CCCC00","#CCCC33","#FF0000","#FF0033","#FF0066","#FF0099","#FF00CC","#FF00FF","#FF3300","#FF3333","#FF3366","#FF3399","#FF33CC","#FF33FF","#FF6600","#FF6633","#FF9900","#FF9933","#FFCC00","#FFCC33"],t.exports=n(5)(e);var h=t.exports.formatters;h.j=function(t){try{return JSON.stringify(t)}catch(e){return"[UnexpectedJSONParseError]: "+e.message}}}).call(e,n(4))},function(t,e){function n(){throw new Error("setTimeout has not been defined")}function r(){throw new Error("clearTimeout has not been defined")}function o(t){if(p===setTimeout)return setTimeout(t,0);if((p===n||!p)&&setTimeout)return p=setTimeout,setTimeout(t,0);try{return p(t,0)}catch(e){try{return p.call(null,t,0)}catch(e){return p.call(this,t,0)}}}function i(t){if(h===clearTimeout)return clearTimeout(t);if((h===r||!h)&&clearTimeout)return h=clearTimeout,clearTimeout(t);try{return h(t)}catch(e){try{return h.call(null,t)}catch(e){return h.call(this,t)}}}function s(){y&&l&&(y=!1,l.length?d=l.concat(d):m=-1,d.length&&a())}function a(){if(!y){var t=o(s);y=!0;for(var e=d.length;e;){for(l=d,d=[];++m1)for(var n=1;n100)){var e=/^(-?(?:\d+)?\.?\d+) *(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|weeks?|w|years?|yrs?|y)?$/i.exec(t);if(e){var n=parseFloat(e[1]),r=(e[2]||"ms").toLowerCase();switch(r){case"years":case"year":case"yrs":case"yr":case"y":return n*h;case"weeks":case"week":case"w":return n*p;case"days":case"day":case"d":return n*u;case"hours":case"hour":case"hrs":case"hr":case"h":return n*c;case"minutes":case"minute":case"mins":case"min":case"m":return n*a;case"seconds":case"second":case"secs":case"sec":case"s":return n*s;case"milliseconds":case"millisecond":case"msecs":case"msec":case"ms":return n;default:return}}}}function r(t){var e=Math.abs(t);return e>=u?Math.round(t/u)+"d":e>=c?Math.round(t/c)+"h":e>=a?Math.round(t/a)+"m":e>=s?Math.round(t/s)+"s":t+"ms"}function o(t){var e=Math.abs(t);return e>=u?i(t,e,u,"day"):e>=c?i(t,e,c,"hour"):e>=a?i(t,e,a,"minute"):e>=s?i(t,e,s,"second"):t+" ms"}function i(t,e,n,r){var 
o=e>=1.5*n;return Math.round(t/n)+" "+r+(o?"s":"")}var s=1e3,a=60*s,c=60*a,u=24*c,p=7*u,h=365.25*u;t.exports=function(t,e){e=e||{};var i=typeof t;if("string"===i&&t.length>0)return n(t);if("number"===i&&isFinite(t))return e["long"]?o(t):r(t);throw new Error("val is not a non-empty string or a valid number. val="+JSON.stringify(t))}},function(t,e,n){function r(){}function o(t){var n=""+t.type;if(e.BINARY_EVENT!==t.type&&e.BINARY_ACK!==t.type||(n+=t.attachments+"-"),t.nsp&&"/"!==t.nsp&&(n+=t.nsp+","),null!=t.id&&(n+=t.id),null!=t.data){var r=i(t.data);if(r===!1)return g;n+=r}return f("encoded %j as %s",t,n),n}function i(t){try{return JSON.stringify(t)}catch(e){return!1}}function s(t,e){function n(t){var n=d.deconstructPacket(t),r=o(n.packet),i=n.buffers;i.unshift(r),e(i)}d.removeBlobs(t,n)}function a(){this.reconstructor=null}function c(t){var n=0,r={type:Number(t.charAt(0))};if(null==e.types[r.type])return h("unknown packet type "+r.type);if(e.BINARY_EVENT===r.type||e.BINARY_ACK===r.type){for(var o="";"-"!==t.charAt(++n)&&(o+=t.charAt(n),n!=t.length););if(o!=Number(o)||"-"!==t.charAt(n))throw new Error("Illegal attachments");r.attachments=Number(o)}if("/"===t.charAt(n+1))for(r.nsp="";++n;){var i=t.charAt(n);if(","===i)break;if(r.nsp+=i,n===t.length)break}else r.nsp="/";var s=t.charAt(n+1);if(""!==s&&Number(s)==s){for(r.id="";++n;){var i=t.charAt(n);if(null==i||Number(i)!=i){--n;break}if(r.id+=t.charAt(n),n===t.length)break}r.id=Number(r.id)}if(t.charAt(++n)){var a=u(t.substr(n)),c=a!==!1&&(r.type===e.ERROR||y(a));if(!c)return h("invalid payload");r.data=a}return f("decoded %s as %j",t,r),r}function u(t){try{return JSON.parse(t)}catch(e){return!1}}function p(t){this.reconPack=t,this.buffers=[]}function h(t){return{type:e.ERROR,data:"parser error: "+t}}var f=n(8)("socket.io-parser"),l=n(11),d=n(12),y=n(13),m=n(14);e.protocol=4,e.types=["CONNECT","DISCONNECT","EVENT","ACK","ERROR","BINARY_EVENT","BINARY_ACK"],e.CONNECT=0,e.DISCONNECT=1,e.EVENT=2,e.ACK=3,e.ERROR=4,e.BINARY_EVENT=5,e.BINARY_ACK=6,e.Encoder=r,e.Decoder=a;var g=e.ERROR+'"encode error"';r.prototype.encode=function(t,n){if(f("encoding packet %j",t),e.BINARY_EVENT===t.type||e.BINARY_ACK===t.type)s(t,n);else{var r=o(t);n([r])}},l(a.prototype),a.prototype.add=function(t){var n;if("string"==typeof t)n=c(t),e.BINARY_EVENT===n.type||e.BINARY_ACK===n.type?(this.reconstructor=new p(n),0===this.reconstructor.reconPack.attachments&&this.emit("decoded",n)):this.emit("decoded",n);else{if(!m(t)&&!t.base64)throw new Error("Unknown type: "+t);if(!this.reconstructor)throw new Error("got binary data when not reconstructing a packet");n=this.reconstructor.takeBinaryData(t),n&&(this.reconstructor=null,this.emit("decoded",n))}},a.prototype.destroy=function(){this.reconstructor&&this.reconstructor.finishedReconstruction()},p.prototype.takeBinaryData=function(t){if(this.buffers.push(t),this.buffers.length===this.reconPack.attachments){var e=d.reconstructPacket(this.reconPack,this.buffers);return this.finishedReconstruction(),e}return null},p.prototype.finishedReconstruction=function(){this.reconPack=null,this.buffers=[]}},function(t,e,n){(function(r){"use strict";function o(){return!("undefined"==typeof window||!window.process||"renderer"!==window.process.type)||("undefined"==typeof navigator||!navigator.userAgent||!navigator.userAgent.toLowerCase().match(/(edge|trident)\/(\d+)/))&&("undefined"!=typeof document&&document.documentElement&&document.documentElement.style&&document.documentElement.style.WebkitAppearance||"undefined"!=typeof 
window&&window.console&&(window.console.firebug||window.console.exception&&window.console.table)||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/firefox\/(\d+)/)&&parseInt(RegExp.$1,10)>=31||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/applewebkit\/(\d+)/))}function i(t){var n=this.useColors;if(t[0]=(n?"%c":"")+this.namespace+(n?" %c":" ")+t[0]+(n?"%c ":" ")+"+"+e.humanize(this.diff),n){var r="color: "+this.color;t.splice(1,0,r,"color: inherit");var o=0,i=0;t[0].replace(/%[a-zA-Z%]/g,function(t){"%%"!==t&&(o++,"%c"===t&&(i=o))}),t.splice(i,0,r)}}function s(){return"object"===("undefined"==typeof console?"undefined":p(console))&&console.log&&Function.prototype.apply.call(console.log,console,arguments)}function a(t){try{null==t?e.storage.removeItem("debug"):e.storage.debug=t}catch(n){}}function c(){var t;try{t=e.storage.debug}catch(n){}return!t&&"undefined"!=typeof r&&"env"in r&&(t=r.env.DEBUG),t}function u(){try{return window.localStorage}catch(t){}}var p="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t};e=t.exports=n(9),e.log=s,e.formatArgs=i,e.save=a,e.load=c,e.useColors=o,e.storage="undefined"!=typeof chrome&&"undefined"!=typeof chrome.storage?chrome.storage.local:u(),e.colors=["#0000CC","#0000FF","#0033CC","#0033FF","#0066CC","#0066FF","#0099CC","#0099FF","#00CC00","#00CC33","#00CC66","#00CC99","#00CCCC","#00CCFF","#3300CC","#3300FF","#3333CC","#3333FF","#3366CC","#3366FF","#3399CC","#3399FF","#33CC00","#33CC33","#33CC66","#33CC99","#33CCCC","#33CCFF","#6600CC","#6600FF","#6633CC","#6633FF","#66CC00","#66CC33","#9900CC","#9900FF","#9933CC","#9933FF","#99CC00","#99CC33","#CC0000","#CC0033","#CC0066","#CC0099","#CC00CC","#CC00FF","#CC3300","#CC3333","#CC3366","#CC3399","#CC33CC","#CC33FF","#CC6600","#CC6633","#CC9900","#CC9933","#CCCC00","#CCCC33","#FF0000","#FF0033","#FF0066","#FF0099","#FF00CC","#FF00FF","#FF3300","#FF3333","#FF3366","#FF3399","#FF33CC","#FF33FF","#FF6600","#FF6633","#FF9900","#FF9933","#FFCC00","#FFCC33"],e.formatters.j=function(t){try{return JSON.stringify(t)}catch(e){return"[UnexpectedJSONParseError]: "+e.message}},e.enable(c())}).call(e,n(4))},function(t,e,n){"use strict";function r(t){var n,r=0;for(n in t)r=(r<<5)-r+t.charCodeAt(n),r|=0;return e.colors[Math.abs(r)%e.colors.length]}function o(t){function n(){if(n.enabled){var t=n,r=+new Date,i=r-(o||r);t.diff=i,t.prev=o,t.curr=r,o=r;for(var s=new Array(arguments.length),a=0;a100)){var e=/^((?:\d+)?\.?\d+) *(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|years?|yrs?|y)?$/i.exec(t);if(e){var n=parseFloat(e[1]),r=(e[2]||"ms").toLowerCase();switch(r){case"years":case"year":case"yrs":case"yr":case"y":return n*p;case"days":case"day":case"d":return n*u;case"hours":case"hour":case"hrs":case"hr":case"h":return n*c;case"minutes":case"minute":case"mins":case"min":case"m":return n*a;case"seconds":case"second":case"secs":case"sec":case"s":return n*s;case"milliseconds":case"millisecond":case"msecs":case"msec":case"ms":return n;default:return}}}}function r(t){return t>=u?Math.round(t/u)+"d":t>=c?Math.round(t/c)+"h":t>=a?Math.round(t/a)+"m":t>=s?Math.round(t/s)+"s":t+"ms"}function o(t){return i(t,u,"day")||i(t,c,"hour")||i(t,a,"minute")||i(t,s,"second")||t+" ms"}function i(t,e,n){if(!(t0)return n(t);if("number"===i&&isNaN(t)===!1)return 
e["long"]?o(t):r(t);throw new Error("val is not a non-empty string or a valid number. val="+JSON.stringify(t))}},function(t,e,n){function r(t){if(t)return o(t)}function o(t){for(var e in r.prototype)t[e]=r.prototype[e];return t}t.exports=r,r.prototype.on=r.prototype.addEventListener=function(t,e){return this._callbacks=this._callbacks||{},(this._callbacks["$"+t]=this._callbacks["$"+t]||[]).push(e),this},r.prototype.once=function(t,e){function n(){this.off(t,n),e.apply(this,arguments)}return n.fn=e,this.on(t,n),this},r.prototype.off=r.prototype.removeListener=r.prototype.removeAllListeners=r.prototype.removeEventListener=function(t,e){if(this._callbacks=this._callbacks||{},0==arguments.length)return this._callbacks={},this;var n=this._callbacks["$"+t];if(!n)return this;if(1==arguments.length)return delete this._callbacks["$"+t],this;for(var r,o=0;o0&&!this.encoding){var t=this.packetBuffer.shift();this.packet(t)}},r.prototype.cleanup=function(){p("cleanup");for(var t=this.subs.length,e=0;e=this._reconnectionAttempts)p("reconnect failed"),this.backoff.reset(),this.emitAll("reconnect_failed"),this.reconnecting=!1;else{var e=this.backoff.duration();p("will wait %dms before reconnect attempt",e),this.reconnecting=!0;var n=setTimeout(function(){t.skipReconnect||(p("attempting reconnect"),t.emitAll("reconnect_attempt",t.backoff.attempts),t.emitAll("reconnecting",t.backoff.attempts),t.skipReconnect||t.open(function(e){e?(p("reconnect attempt error"),t.reconnecting=!1,t.reconnect(),t.emitAll("reconnect_error",e.data)):(p("reconnect success"),t.onreconnect())}))},e);this.subs.push({destroy:function(){clearTimeout(n)}})}},r.prototype.onreconnect=function(){var t=this.backoff.attempts;this.reconnecting=!1,this.backoff.reset(),this.updateSocketIds(),this.emitAll("reconnect",t)}},function(t,e,n){t.exports=n(17),t.exports.parser=n(24)},function(t,e,n){function r(t,e){return this instanceof r?(e=e||{},t&&"object"==typeof t&&(e=t,t=null),t?(t=p(t),e.hostname=t.host,e.secure="https"===t.protocol||"wss"===t.protocol,e.port=t.port,t.query&&(e.query=t.query)):e.host&&(e.hostname=p(e.host).host),this.secure=null!=e.secure?e.secure:"undefined"!=typeof location&&"https:"===location.protocol,e.hostname&&!e.port&&(e.port=this.secure?"443":"80"),this.agent=e.agent||!1,this.hostname=e.hostname||("undefined"!=typeof location?location.hostname:"localhost"),this.port=e.port||("undefined"!=typeof location&&location.port?location.port:this.secure?443:80),this.query=e.query||{},"string"==typeof 
this.query&&(this.query=h.decode(this.query)),this.upgrade=!1!==e.upgrade,this.path=(e.path||"/engine.io").replace(/\/$/,"")+"/",this.forceJSONP=!!e.forceJSONP,this.jsonp=!1!==e.jsonp,this.forceBase64=!!e.forceBase64,this.enablesXDR=!!e.enablesXDR,this.withCredentials=!1!==e.withCredentials,this.timestampParam=e.timestampParam||"t",this.timestampRequests=e.timestampRequests,this.transports=e.transports||["polling","websocket"],this.transportOptions=e.transportOptions||{},this.readyState="",this.writeBuffer=[],this.prevBufferLen=0,this.policyPort=e.policyPort||843,this.rememberUpgrade=e.rememberUpgrade||!1,this.binaryType=null,this.onlyBinaryUpgrades=e.onlyBinaryUpgrades,this.perMessageDeflate=!1!==e.perMessageDeflate&&(e.perMessageDeflate||{}),!0===this.perMessageDeflate&&(this.perMessageDeflate={}),this.perMessageDeflate&&null==this.perMessageDeflate.threshold&&(this.perMessageDeflate.threshold=1024),this.pfx=e.pfx||null,this.key=e.key||null,this.passphrase=e.passphrase||null,this.cert=e.cert||null,this.ca=e.ca||null,this.ciphers=e.ciphers||null,this.rejectUnauthorized=void 0===e.rejectUnauthorized||e.rejectUnauthorized,this.forceNode=!!e.forceNode,this.isReactNative="undefined"!=typeof navigator&&"string"==typeof navigator.product&&"reactnative"===navigator.product.toLowerCase(),("undefined"==typeof self||this.isReactNative)&&(e.extraHeaders&&Object.keys(e.extraHeaders).length>0&&(this.extraHeaders=e.extraHeaders),e.localAddress&&(this.localAddress=e.localAddress)),this.id=null,this.upgrades=null,this.pingInterval=null,this.pingTimeout=null,this.pingIntervalTimer=null,this.pingTimeoutTimer=null,void this.open()):new r(t,e)}function o(t){var e={};for(var n in t)t.hasOwnProperty(n)&&(e[n]=t[n]);return e}var i=n(18),s=n(11),a=n(3)("engine.io-client:socket"),c=n(38),u=n(24),p=n(2),h=n(32);t.exports=r,r.priorWebsocketSuccess=!1,s(r.prototype),r.protocol=u.protocol,r.Socket=r,r.Transport=n(23),r.transports=n(18),r.parser=n(24),r.prototype.createTransport=function(t){a('creating transport "%s"',t);var e=o(this.query);e.EIO=u.protocol,e.transport=t;var n=this.transportOptions[t]||{};this.id&&(e.sid=this.id);var r=new i[t]({query:e,socket:this,agent:n.agent||this.agent,hostname:n.hostname||this.hostname,port:n.port||this.port,secure:n.secure||this.secure,path:n.path||this.path,forceJSONP:n.forceJSONP||this.forceJSONP,jsonp:n.jsonp||this.jsonp,forceBase64:n.forceBase64||this.forceBase64,enablesXDR:n.enablesXDR||this.enablesXDR,withCredentials:n.withCredentials||this.withCredentials,timestampRequests:n.timestampRequests||this.timestampRequests,timestampParam:n.timestampParam||this.timestampParam,policyPort:n.policyPort||this.policyPort,pfx:n.pfx||this.pfx,key:n.key||this.key,passphrase:n.passphrase||this.passphrase,cert:n.cert||this.cert,ca:n.ca||this.ca,ciphers:n.ciphers||this.ciphers,rejectUnauthorized:n.rejectUnauthorized||this.rejectUnauthorized,perMessageDeflate:n.perMessageDeflate||this.perMessageDeflate,extraHeaders:n.extraHeaders||this.extraHeaders,forceNode:n.forceNode||this.forceNode,localAddress:n.localAddress||this.localAddress,requestTimeout:n.requestTimeout||this.requestTimeout,protocols:n.protocols||void 0,isReactNative:this.isReactNative});return r},r.prototype.open=function(){var t;if(this.rememberUpgrade&&r.priorWebsocketSuccess&&this.transports.indexOf("websocket")!==-1)t="websocket";else{ +if(0===this.transports.length){var e=this;return void setTimeout(function(){e.emit("error","No transports 
available")},0)}t=this.transports[0]}this.readyState="opening";try{t=this.createTransport(t)}catch(n){return this.transports.shift(),void this.open()}t.open(),this.setTransport(t)},r.prototype.setTransport=function(t){a("setting transport %s",t.name);var e=this;this.transport&&(a("clearing existing transport %s",this.transport.name),this.transport.removeAllListeners()),this.transport=t,t.on("drain",function(){e.onDrain()}).on("packet",function(t){e.onPacket(t)}).on("error",function(t){e.onError(t)}).on("close",function(){e.onClose("transport close")})},r.prototype.probe=function(t){function e(){if(f.onlyBinaryUpgrades){var e=!this.supportsBinary&&f.transport.supportsBinary;h=h||e}h||(a('probe transport "%s" opened',t),p.send([{type:"ping",data:"probe"}]),p.once("packet",function(e){if(!h)if("pong"===e.type&&"probe"===e.data){if(a('probe transport "%s" pong',t),f.upgrading=!0,f.emit("upgrading",p),!p)return;r.priorWebsocketSuccess="websocket"===p.name,a('pausing current transport "%s"',f.transport.name),f.transport.pause(function(){h||"closed"!==f.readyState&&(a("changing transport and sending upgrade packet"),u(),f.setTransport(p),p.send([{type:"upgrade"}]),f.emit("upgrade",p),p=null,f.upgrading=!1,f.flush())})}else{a('probe transport "%s" failed',t);var n=new Error("probe error");n.transport=p.name,f.emit("upgradeError",n)}}))}function n(){h||(h=!0,u(),p.close(),p=null)}function o(e){var r=new Error("probe error: "+e);r.transport=p.name,n(),a('probe transport "%s" failed because of error: %s',t,e),f.emit("upgradeError",r)}function i(){o("transport closed")}function s(){o("socket closed")}function c(t){p&&t.name!==p.name&&(a('"%s" works - aborting "%s"',t.name,p.name),n())}function u(){p.removeListener("open",e),p.removeListener("error",o),p.removeListener("close",i),f.removeListener("close",s),f.removeListener("upgrading",c)}a('probing transport "%s"',t);var p=this.createTransport(t,{probe:1}),h=!1,f=this;r.priorWebsocketSuccess=!1,p.once("open",e),p.once("error",o),p.once("close",i),this.once("close",s),this.once("upgrading",c),p.open()},r.prototype.onOpen=function(){if(a("socket open"),this.readyState="open",r.priorWebsocketSuccess="websocket"===this.transport.name,this.emit("open"),this.flush(),"open"===this.readyState&&this.upgrade&&this.transport.pause){a("starting upgrade probes");for(var t=0,e=this.upgrades.length;t1?{type:b[o],data:t.substring(1)}:{type:b[o]}:C}var i=new Uint8Array(t),o=i[0],s=f(t,1);return w&&"blob"===n&&(s=new w([s])),{type:b[o],data:s}},e.decodeBase64Packet=function(t,e){var n=b[t.charAt(0)];if(!u)return{type:n,data:{base64:!0,data:t.substr(1)}};var r=u.decode(t.substr(1));return"blob"===e&&w&&(r=new w([r])),{type:n,data:r}},e.encodePayload=function(t,n,r){function o(t){return t.length+":"+t}function i(t,r){e.encodePacket(t,!!s&&n,!1,function(t){r(null,o(t))})}"function"==typeof n&&(r=n,n=null);var s=h(t);return n&&s?w&&!g?e.encodePayloadAsBlob(t,r):e.encodePayloadAsArrayBuffer(t,r):t.length?void c(t,i,function(t,e){return r(e.join(""))}):r("0:")},e.decodePayload=function(t,n,r){if("string"!=typeof t)return e.decodePayloadAsBinary(t,n,r);"function"==typeof n&&(r=n,n=null);var o;if(""===t)return r(C,0,1);for(var i,s,a="",c=0,u=t.length;c0;){for(var s=new Uint8Array(o),a=0===s[0],c="",u=1;255!==s[u];u++){if(c.length>310)return r(C,0,1);c+=s[u]}o=f(o,2+c.length),c=parseInt(c);var p=f(o,0,c);if(a)try{p=String.fromCharCode.apply(null,new Uint8Array(p))}catch(h){var l=new Uint8Array(p);p="";for(var u=0;ur&&(n=r),e>=r||e>=n||0===r)return new ArrayBuffer(0);for(var 
o=new Uint8Array(t),i=new Uint8Array(n-e),s=e,a=0;s=55296&&e<=56319&&o65535&&(e-=65536,o+=d(e>>>10&1023|55296),e=56320|1023&e),o+=d(e);return o}function o(t,e){if(t>=55296&&t<=57343){if(e)throw Error("Lone surrogate U+"+t.toString(16).toUpperCase()+" is not a scalar value");return!1}return!0}function i(t,e){return d(t>>e&63|128)}function s(t,e){if(0==(4294967168&t))return d(t);var n="";return 0==(4294965248&t)?n=d(t>>6&31|192):0==(4294901760&t)?(o(t,e)||(t=65533),n=d(t>>12&15|224),n+=i(t,6)):0==(4292870144&t)&&(n=d(t>>18&7|240),n+=i(t,12),n+=i(t,6)),n+=d(63&t|128)}function a(t,e){e=e||{};for(var r,o=!1!==e.strict,i=n(t),a=i.length,c=-1,u="";++c=f)throw Error("Invalid byte index");var t=255&h[l];if(l++,128==(192&t))return 63&t;throw Error("Invalid continuation byte")}function u(t){var e,n,r,i,s;if(l>f)throw Error("Invalid byte index");if(l==f)return!1;if(e=255&h[l],l++,0==(128&e))return e;if(192==(224&e)){if(n=c(),s=(31&e)<<6|n,s>=128)return s;throw Error("Invalid continuation byte")}if(224==(240&e)){if(n=c(),r=c(),s=(15&e)<<12|n<<6|r,s>=2048)return o(s,t)?s:65533;throw Error("Invalid continuation byte")}if(240==(248&e)&&(n=c(),r=c(),i=c(),s=(7&e)<<18|n<<12|r<<6|i,s>=65536&&s<=1114111))return s;throw Error("Invalid UTF-8 detected")}function p(t,e){e=e||{};var o=!1!==e.strict;h=n(t),f=h.length,l=0;for(var i,s=[];(i=u(o))!==!1;)s.push(i);return r(s)}/*! https://mths.be/utf8js v2.1.2 by @mathias */ +var h,f,l,d=String.fromCharCode;t.exports={version:"2.1.2",encode:a,decode:p}},function(t,e){!function(){"use strict";for(var t="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",n=new Uint8Array(256),r=0;r>2],i+=t[(3&r[n])<<4|r[n+1]>>4],i+=t[(15&r[n+1])<<2|r[n+2]>>6],i+=t[63&r[n+2]];return o%3===2?i=i.substring(0,i.length-1)+"=":o%3===1&&(i=i.substring(0,i.length-2)+"=="),i},e.decode=function(t){var e,r,o,i,s,a=.75*t.length,c=t.length,u=0;"="===t[t.length-1]&&(a--,"="===t[t.length-2]&&a--);var p=new ArrayBuffer(a),h=new Uint8Array(p);for(e=0;e>4,h[u++]=(15&o)<<4|i>>2,h[u++]=(3&i)<<6|63&s;return p}}()},function(t,e){function n(t){return t.map(function(t){if(t.buffer instanceof ArrayBuffer){var e=t.buffer;if(t.byteLength!==e.byteLength){var n=new Uint8Array(t.byteLength);n.set(new Uint8Array(e,t.byteOffset,t.byteLength)),e=n.buffer}return e}return t})}function r(t,e){e=e||{};var r=new i;return n(t).forEach(function(t){r.append(t)}),e.type?r.getBlob(e.type):r.getBlob()}function o(t,e){return new Blob(n(t),e||{})}var i="undefined"!=typeof i?i:"undefined"!=typeof WebKitBlobBuilder?WebKitBlobBuilder:"undefined"!=typeof MSBlobBuilder?MSBlobBuilder:"undefined"!=typeof MozBlobBuilder&&MozBlobBuilder,s=function(){try{var t=new Blob(["hi"]);return 2===t.size}catch(e){return!1}}(),a=s&&function(){try{var t=new Blob([new Uint8Array([1,2])]);return 2===t.size}catch(e){return!1}}(),c=i&&i.prototype.append&&i.prototype.getBlob;"undefined"!=typeof Blob&&(r.prototype=Blob.prototype,o.prototype=Blob.prototype),t.exports=function(){return s?a?Blob:o:c?r:void 0}()},function(t,e){e.encode=function(t){var e="";for(var n in t)t.hasOwnProperty(n)&&(e.length&&(e+="&"),e+=encodeURIComponent(n)+"="+encodeURIComponent(t[n]));return e},e.decode=function(t){for(var e={},n=t.split("&"),r=0,o=n.length;r0);return e}function r(t){var e=0;for(p=0;p';i=document.createElement(e)}catch(t){i=document.createElement("iframe"),i.name=o.iframeId,i.src="javascript:0"}i.id=o.iframeId,o.form.appendChild(i),o.iframe=i}var o=this;if(!this.form){var 
i,s=document.createElement("form"),a=document.createElement("textarea"),c=this.iframeId="eio_iframe_"+this.index;s.className="socketio",s.style.position="absolute",s.style.top="-1000px",s.style.left="-1000px",s.target=c,s.method="POST",s.setAttribute("accept-charset","utf-8"),a.name="d",s.appendChild(a),document.body.appendChild(s),this.form=s,this.area=a}this.form.action=this.uri(),r(),t=t.replace(p,"\\\n"),this.area.value=t.replace(u,"\\n");try{this.form.submit()}catch(h){}this.iframe.attachEvent?this.iframe.onreadystatechange=function(){"complete"===o.iframe.readyState&&n()}:this.iframe.onload=n}}).call(e,function(){return this}())},function(t,e,n){function r(t){var e=t&&t.forceBase64;e&&(this.supportsBinary=!1),this.perMessageDeflate=t.perMessageDeflate,this.usingBrowserWebSocket=o&&!t.forceNode,this.protocols=t.protocols,this.usingBrowserWebSocket||(l=i),s.call(this,t)}var o,i,s=n(23),a=n(24),c=n(32),u=n(33),p=n(34),h=n(3)("engine.io-client:websocket");if("undefined"!=typeof WebSocket?o=WebSocket:"undefined"!=typeof self&&(o=self.WebSocket||self.MozWebSocket),"undefined"==typeof window)try{i=n(37)}catch(f){}var l=o||i;t.exports=r,u(r,s),r.prototype.name="websocket",r.prototype.supportsBinary=!0,r.prototype.doOpen=function(){if(this.check()){var t=this.uri(),e=this.protocols,n={agent:this.agent,perMessageDeflate:this.perMessageDeflate};n.pfx=this.pfx,n.key=this.key,n.passphrase=this.passphrase,n.cert=this.cert,n.ca=this.ca,n.ciphers=this.ciphers,n.rejectUnauthorized=this.rejectUnauthorized,this.extraHeaders&&(n.headers=this.extraHeaders),this.localAddress&&(n.localAddress=this.localAddress);try{this.ws=this.usingBrowserWebSocket&&!this.isReactNative?e?new l(t,e):new l(t):new l(t,e,n)}catch(r){return this.emit("error",r)}void 0===this.ws.binaryType&&(this.supportsBinary=!1),this.ws.supports&&this.ws.supports.binary?(this.supportsBinary=!0,this.ws.binaryType="nodebuffer"):this.ws.binaryType="arraybuffer",this.addEventListeners()}},r.prototype.addEventListeners=function(){var t=this;this.ws.onopen=function(){t.onOpen()},this.ws.onclose=function(){t.onClose()},this.ws.onmessage=function(e){t.onData(e.data)},this.ws.onerror=function(e){t.onError("websocket error",e)}},r.prototype.write=function(t){function e(){n.emit("flush"),setTimeout(function(){n.writable=!0,n.emit("drain")},0)}var n=this;this.writable=!1;for(var r=t.length,o=0,i=r;o0&&t.jitter<=1?t.jitter:0,this.attempts=0}t.exports=n,n.prototype.duration=function(){var t=this.ms*Math.pow(this.factor,this.attempts++);if(this.jitter){var e=Math.random(),n=Math.floor(e*this.jitter*t);t=0==(1&Math.floor(10*e))?t-n:t+n}return 0|Math.min(t,this.max)},n.prototype.reset=function(){this.attempts=0},n.prototype.setMin=function(t){this.ms=t},n.prototype.setMax=function(t){this.max=t},n.prototype.setJitter=function(t){this.jitter=t}}])}); +//# sourceMappingURL=socket.io.js.map \ No newline at end of file diff --git a/schedule/slides/20-boosting-speaker.html b/schedule/slides/20-boosting-speaker.html new file mode 100644 index 0000000..c82b596 --- /dev/null +++ b/schedule/slides/20-boosting-speaker.html @@ -0,0 +1,1084 @@ + + + + + + + + + + + + + + UBC Stat406 2023W – boosting + + + + + + + + + + + + + + + + + + +
+
+ + +
+

20 Boosting

+

Stat 406

+

Daniel J. McDonald

+

Last modified – 02 November 2023

+

\[
\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\minimize}{minimize}
\DeclareMathOperator*{\maximize}{maximize}
\DeclareMathOperator*{\find}{find}
\DeclareMathOperator{\st}{subject\,\,to}
\newcommand{\E}{E}
\newcommand{\Expect}[1]{\E\left[ #1 \right]}
\newcommand{\Var}[1]{\mathrm{Var}\left[ #1 \right]}
\newcommand{\Cov}[2]{\mathrm{Cov}\left[#1,\ #2\right]}
\newcommand{\given}{\ \vert\ }
\newcommand{\X}{\mathbf{X}}
\newcommand{\x}{\mathbf{x}}
\newcommand{\y}{\mathbf{y}}
\newcommand{\P}{\mathcal{P}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\norm}[1]{\left\lVert #1 \right\rVert}
\newcommand{\snorm}[1]{\lVert #1 \rVert}
\newcommand{\tr}[1]{\mbox{tr}(#1)}
\newcommand{\brt}{\widehat{\beta}^R_{s}}
\newcommand{\brl}{\widehat{\beta}^R_{\lambda}}
\newcommand{\bls}{\widehat{\beta}_{ols}}
\newcommand{\blt}{\widehat{\beta}^L_{s}}
\newcommand{\bll}{\widehat{\beta}^L_{\lambda}}
\newcommand{\U}{\mathbf{U}}
\newcommand{\D}{\mathbf{D}}
\newcommand{\V}{\mathbf{V}}
\]

+
+
+

Last time

+

We learned about bagging, for averaging low-bias / high-variance estimators.

+

Today, we examine its opposite: Boosting.

+

Boosting also combines estimators, but it combines high-bias / low-variance estimators.

+

Boosting has a number of flavours. And if you Google descriptions, most are wrong.

+

For a deep (and accurate) treatment, see [ESL] Chapter 10

+
+

We’ll discuss 2 flavours: AdaBoost and Gradient Boosting

+

Neither requires a tree, but that’s the typical usage.

+

Boosting needs a “weak learner”, so small trees (stumps) are natural.

+
+
+
+

AdaBoost intuition (for classification)

+

At each iteration, we weight the observations.

+

Observations that are currently misclassified get higher weights.

+

So on the next iteration, we’ll try harder to correctly classify our mistakes.

+

The number of iterations must be chosen.

+
+
+

AdaBoost (Freund and Schapire, generic)

+

Let \(G(x, \theta)\) be any weak learner

+

⛭ imagine a tree with one split: then \(\theta=\) (feature, split point)

+

Algorithm (AdaBoost) 🛠️

+
    +
  • Set observation weights \(w_i=1/n\).
  • Until we quit ( \(m<M\) iterations )
    1. Estimate the classifier \(G(x,\theta_m)\) using weights \(w_i\)
    2. Calculate its weighted error \(\textrm{err}_m = \sum_{i=1}^n w_i I(y_i \neq G(x_i, \theta_m)) / \sum w_i\)
    3. Set \(\alpha_m = \log((1-\textrm{err}_m)/\textrm{err}_m)\)
    4. Update \(w_i \leftarrow w_i \exp(\alpha_m I(y_i \neq G(x_i,\theta_m)))\)
  • Final classifier is \(G(x) = \textrm{sign}\left( \sum_{m=1}^M \alpha_m G(x, \theta_m)\right)\)
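To make the loop above concrete, here is a minimal R sketch that uses one-split rpart trees as the weak learner. It is an illustration, not the course's implementation: the data frame dat, its factor response y with levels "-1" and "1", the number of rounds M, and the function names are all assumptions, and edge cases (e.g. a stump with zero weighted error) are ignored.

# Minimal AdaBoost with decision stumps via {rpart} -- illustrative sketch only.
library(rpart)

adaboost_stumps <- function(dat, M = 100) {
  n <- nrow(dat)
  w <- rep(1 / n, n)                       # equal observation weights to start
  learners <- vector("list", M)
  alpha <- numeric(M)
  for (m in seq_len(M)) {
    # 1. weak learner: a one-split tree ("stump"), fit with the current weights
    learners[[m]] <- rpart(
      y ~ ., data = dat, weights = w, method = "class",
      control = rpart.control(maxdepth = 1, cp = 0, minsplit = 2)
    )
    yhat <- predict(learners[[m]], dat, type = "class")
    miss <- as.numeric(yhat != dat$y)      # 1 if misclassified
    err <- sum(w * miss) / sum(w)          # 2. weighted error
    alpha[m] <- log((1 - err) / err)       # 3. this learner's vote
    w <- w * exp(alpha[m] * miss)          # 4. upweight the mistakes
  }
  list(learners = learners, alpha = alpha)
}

# Final classifier: sign of the alpha-weighted vote
predict_adaboost <- function(fit, newdata) {
  votes <- vapply(
    seq_along(fit$learners),
    function(m) {
      pred <- predict(fit$learners[[m]], newdata, type = "class")
      fit$alpha[m] * ifelse(pred == "1", 1, -1)
    },
    numeric(nrow(newdata))
  )
  sign(rowSums(votes))
}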
+
+
+

Using mobility data again

+
+
+Code +
library(kableExtra)
library(randomForest)
mob <- Stat406::mobility |>
  mutate(mobile = as.factor(Mobility > .1)) |>
  select(-ID, -Name, -Mobility, -State) |>
  drop_na()
n <- nrow(mob)
trainidx <- sample.int(n, floor(n * .75))
testidx <- setdiff(1:n, trainidx)
train <- mob[trainidx, ]
test <- mob[testidx, ]
rf <- randomForest(mobile ~ ., data = train)
bag <- randomForest(mobile ~ ., data = train, mtry = ncol(mob) - 1)
preds <- tibble(truth = test$mobile, rf = predict(rf, test), bag = predict(bag, test))
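As a quick check (not in the original slides), the test misclassification rates of these two baselines can be read straight off preds; this sketch assumes dplyr is loaded by the course setup.

# Sketch: test-set error rates for random forest vs. bagging.
preds |>
  summarise(rf = mean(rf != truth), bag = mean(bag != truth))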
+
+
+
+
+
library(gbm)
train_boost <- train |>
  mutate(mobile = as.integer(mobile) - 1)
# needs {0, 1} responses
test_boost <- test |>
  mutate(mobile = as.integer(mobile) - 1)
adab <- gbm(
  mobile ~ .,
  data = train_boost,
  n.trees = 500,
  distribution = "adaboost"
)
preds$adab <- as.numeric(
  predict(adab, test_boost) > 0
)
par(mar = c(5, 11, 0, 1))
s <- summary(adab, las = 1)
[Figure: relative influence of each predictor in the adaboost fit, from summary(adab)]
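The 500 trees above is a fixed choice; the number of boosting iterations is really a tuning parameter. One way to choose it with {gbm} is its built-in cross-validation, sketched here (cv.folds and gbm.perf() are part of the gbm package, but the particular settings are an illustration, not the slides' recipe).

# Sketch: choose the number of trees by 5-fold CV instead of fixing 500.
adab_cv <- gbm(
  mobile ~ .,
  data = train_boost,
  distribution = "adaboost",
  n.trees = 500,
  cv.folds = 5
)
best_iter <- gbm.perf(adab_cv, method = "cv")   # CV-optimal number of trees
preds$adab_cv <- as.numeric(
  predict(adab_cv, test_boost, n.trees = best_iter) > 0
)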

Forward stagewise additive modeling (FSAM, completely generic)

+

Algorithm 🛠️

+
    +
  • Set initial predictor \(f_0(x)=0\)
  • Until we quit ( \(m<M\) iterations )
    1. Compute \((\beta_m, \theta_m) = \argmin_{\beta, \theta} \sum_{i=1}^n L\left(y_i,\ f_{m-1}(x_i) + \beta G(x_i,\ \theta)\right)\)
    2. Set \(f_m(x) = f_{m-1}(x) + \beta_m G(x,\ \theta_m)\)
  • Final classifier is \(G(x, \theta_M) = \textrm{sign}\left( f_M(x) \right)\)
+

Here, \(L\) is a loss function that measures prediction accuracy

+
+
    +
  • If (1) \(L(y,\ f(x))= \exp(-y f(x))\), (2) \(G\) is a classifier, and WLOG \(y \in \{-1, 1\}\)
+

FSAM is equivalent to AdaBoost. Proven 5 years later (Friedman, Hastie, and Tibshirani 2000).
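A sketch of the key step, following [ESL] Chapter 10: with exponential loss, the FSAM subproblem at step \(m\) factors as

\[
\sum_{i=1}^n \exp\left(-y_i\left(f_{m-1}(x_i) + \beta G(x_i, \theta)\right)\right)
= \sum_{i=1}^n w_i^{(m)} \exp\left(-\beta\, y_i\, G(x_i, \theta)\right),
\qquad w_i^{(m)} = \exp\left(-y_i f_{m-1}(x_i)\right).
\]

So choosing \(\theta\) means fitting the weak learner under exactly the observation weights AdaBoost maintains, and minimizing over \(\beta\) gives \(\beta_m = \tfrac{1}{2}\log\left((1-\textrm{err}_m)/\textrm{err}_m\right) = \alpha_m / 2\); the factor of 2 rescales the votes but does not change the sign of the final classifier.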

+
+
+
+

So what?

+

It turns out that “exponential loss” \(L(y,\ f(x))= \exp(-y f(x))\) is not very robust.

+

Here are some other loss functions for 2-class classification

[Figure: loss functions for 2-class classification plotted against the margin]
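The figure itself did not survive conversion; a small sketch like the following reproduces the usual comparison as a function of the margin \(y f(x)\). The exact curves and scalings in the original figure are an assumption.

# Sketch: common 2-class losses as functions of the margin y * f(x).
library(tidyverse)

margin <- seq(-3, 3, length.out = 200)
tibble(
  margin = margin,
  exponential = exp(-margin),
  `binomial deviance` = log(1 + exp(-2 * margin)),  # one common parameterization
  `hinge (SVM)` = pmax(1 - margin, 0),
  misclassification = as.numeric(margin < 0)
) |>
  pivot_longer(-margin, names_to = "loss", values_to = "value") |>
  ggplot(aes(margin, value, color = loss)) +
  geom_line() +
  labs(x = "margin y f(x)", y = "loss")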

Want losses which penalize negative margin, but not positive margins.

+

Robust means don’t over-penalize large negatives

+
+
+
+

Gradient boosting

+

In the forward stagewise algorithm, we solved a minimization and then made an update:

+

\[f_m(x) = f_{m-1}(x) + \beta_m G(x, \theta_m)\]

+

For most loss functions \(L\) / procedures \(G\) this optimization is difficult: \[\argmin_{\beta, \theta} \sum_{i=1}^n L\left(y_i,\ f_{m-1}(x_i) + \beta G(x_i, \theta)\right)\]

+

💡 Just take one gradient step toward the minimum 💡

+

\[f_m(x) = f_{m-1}(x) -\gamma_m \nabla L(y,f_{m-1}(x)) = f_{m-1}(x) +\gamma_m \left(-\nabla L(y,f_{m-1}(x))\right)\]

+

This is called Gradient boosting

+

Notice how similar the update steps look.

+
+
+

Gradient boosting

+

\[f_m(x) = f_{m-1}(x) -\gamma_m \nabla L(y,f_{m-1}(x)) = f_{m-1}(x) +\gamma_m \left(-\nabla L(y,f_{m-1}(x))\right)\]

+

Gradient boosting goes only part of the way toward the minimum at each \(m\).

+

This has two advantages:

+
    +
  1. Since we’re not fitting \(\beta, \theta\) to the data as “hard”, the learner is weaker.
  2. This procedure is computationally much simpler.
+

Simpler because we only require the gradient at one value; we don’t have to fully optimize.

+
+
+

Gradient boosting – Algorithm 🛠️

+
    +
  • Set initial predictor \(f_0(x)=\overline{\y}\)
  • Until we quit ( \(m<M\) iterations )
    1. Compute pseudo-residuals (what is the gradient of \(L(y,f)=(y-f(x))^2\)?) \[r_i = -\frac{\partial L(y_i,f(x_i))}{\partial f(x_i)}\bigg|_{f(x_i)=f_{m-1}(x_i)}\]
    2. Estimate weak learner, \(G(x, \theta_m)\), with the training set \(\{r_i, x_i\}\).
    3. Find the step size \(\gamma_m = \argmin_\gamma \sum_{i=1}^n L(y_i, f_{m-1}(x_i) + \gamma G(x_i, \theta_m))\)
    4. Set \(f_m(x) = f_{m-1}(x) + \gamma_m G(x, \theta_m)\)
  • Final predictor is \(f_M(x)\).
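To answer the parenthetical in step 1: for \(L(y,f)=(y-f(x))^2\) the negative gradient is \(2(y_i - f_{m-1}(x_i))\), so up to the constant the pseudo-residuals are just the ordinary residuals. Below is a minimal regression sketch of the loop with one-split rpart learners. It is an illustration under assumptions, not the course's code: dat is an assumed data frame with numeric response y, and the explicit line search in step 3 is replaced by a fixed shrinkage factor (discussed under “Gradient boosting modifications”).

# Minimal gradient boosting for regression with squared-error loss -- sketch only.
library(rpart)

grad_boost_sq <- function(dat, M = 200, shrinkage = 0.1) {
  f <- rep(mean(dat$y), nrow(dat))          # f_0(x) = ybar
  trees <- vector("list", M)
  for (m in seq_len(M)) {
    dat$r <- dat$y - f                      # pseudo-residuals for squared error
    # weak learner fit to the residuals (a depth-1 tree here)
    trees[[m]] <- rpart(
      r ~ . - y, data = dat,
      control = rpart.control(maxdepth = 1, cp = 0)
    )
    # with squared error and a tree fit to the residuals, the unshrunken step is
    # the tree's fitted values; shrinkage plays the role of the damped step alpha
    f <- f + shrinkage * predict(trees[[m]], dat)
  }
  list(f0 = mean(dat$y), trees = trees, shrinkage = shrinkage)
}

predict_grad_boost <- function(fit, newdata) {
  steps <- vapply(fit$trees, predict, numeric(nrow(newdata)), newdata = newdata)
  fit$f0 + fit$shrinkage * rowSums(steps)
}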
+
+
+

Gradient boosting modifications

+
+
grad_boost <- gbm(mobile ~ ., data = train_boost, n.trees = 500, distribution = "bernoulli")
+
+
    +
  • Typically done with “small” trees, not stumps, because of the gradient. You can specify the size. Usually 4-8 terminal nodes is recommended (more gives more interactions between predictors)
  • Usually modify the gradient step to \(f_m(x) = f_{m-1}(x) + \gamma_m \alpha G(x,\theta_m)\) with \(0<\alpha<1\). Helps to keep from fitting too hard.
  • Often combined with Bagging so that each step is fit using a bootstrap resample of the data. Gives us out-of-bag options.
  • There are many other extensions, notably XGBoost.
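These knobs map directly onto arguments of gbm() (interaction.depth, shrinkage, bag.fraction, cv.folds are real gbm arguments); the specific values below are plausible choices for illustration, not a recommendation from the course.

# Sketch: explicit tree size, shrinkage (the alpha above), and subsampling.
grad_boost_tuned <- gbm(
  mobile ~ .,
  data = train_boost,
  distribution = "bernoulli",
  n.trees = 1000,
  interaction.depth = 4,  # small trees rather than stumps
  shrinkage = 0.05,       # damped gradient step
  bag.fraction = 0.5,     # each tree sees a random half of the data
  cv.folds = 5
)
gbm.perf(grad_boost_tuned, method = "cv")  # CV-chosen number of trees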
+
+
+

Results for mobility

+
+
+Code +
library(cowplot)
+boost_preds <- tibble(
+  adaboost = predict(adab, test_boost),
+  gbm = predict(grad_boost, test_boost),
+  truth = test$mobile
+)
+g1 <- ggplot(boost_preds, aes(adaboost, gbm, color = as.factor(truth))) +
+  geom_text(aes(label = as.integer(truth) - 1)) +
+  geom_vline(xintercept = 0) +
+  geom_hline(yintercept = 0) +
+  xlab("adaboost margin") +
+  ylab("gbm margin") +
+  theme(legend.position = "none") +
+  scale_color_manual(values = c("orange", "blue")) +
+  annotate("text",
+    x = -4, y = 5, color = red,
+    label = paste(
+      "gbm error\n",
+      round(with(boost_preds, mean((gbm > 0) != truth)), 2)
+    )
+  ) +
+  annotate("text",
+    x = 4, y = -5, color = red,
+    label = paste("adaboost error\n", round(with(boost_preds, mean((adaboost > 0) != truth)), 2))
+  )
+boost_oob <- tibble(
+  adaboost = adab$oobag.improve, gbm = grad_boost$oobag.improve,
+  ntrees = 1:500
+)
+g2 <- boost_oob %>%
+  pivot_longer(-ntrees, values_to = "OOB_Error") %>%
+  ggplot(aes(x = ntrees, y = OOB_Error, color = name)) +
+  geom_line() +
+  scale_color_manual(values = c(orange, blue)) +
+  theme(legend.title = element_blank())
+plot_grid(g1, g2, rel_widths = c(.4, .6))
+
[Figure: left, test-set margins of adaboost vs. gbm with error rates annotated; right, OOB improvement vs. number of trees]
+
+
+

Major takeaways

+
    +
  • Two flavours of Boosting
    1. AdaBoost (the original) and
    2. gradient boosting (easier and more computationally friendly)
  • The connection is “Forward stagewise additive modelling” (AdaBoost is a special case)
  • The connection reveals that AdaBoost “isn’t robust because it uses exponential loss” (squared error is even worse)
  • Gradient boosting is a computationally easier version of FSAM
  • All use weak learners (compare to Bagging)
  • Think about the Bias-Variance implications
  • You can use these for regression or classification
  • You can do this with other weak learners besides trees.
+
+
+

Next time…

+

Neural networks and deep learning, the beginning

+ + +
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/schedule/slides/20-boosting.qmd b/schedule/slides/20-boosting.qmd index 012482b..9c06418 100644 --- a/schedule/slides/20-boosting.qmd +++ b/schedule/slides/20-boosting.qmd @@ -1,6 +1,8 @@ --- lecture: "20 Boosting" -format: revealjs +format: + revealjs: + multiplex: true metadata-files: - _metadata.yml ---