JavaScript bundlers, HMR and customElements.define() #829
I think you can reasonably fake it. We should probably provide a way to update the list of observed attributes for a given custom element.
So the observed attributes are stored in the registry, right? This would mean some kind of way to push updates to the registry for a given name? Should you only be able to update the attributes, or also which callbacks are important?
For the HMR use case, callbacks can be dealt with by defining all of them and having them delegate to the new definition. It certainly would be nice if that wasn't necessary, though. For attributes this could also be addressed with a wild-card option.

What are the hazards of allowing re-definition of elements? Could we add an option to enable it?

We recently got a limited form of HMR working within Google by patching:

```js
class MyElement {
  static observedAttributes = [...];

  static redefine(newClass) {
    ...
  }
}
```
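A hedged sketch of how such a `redefine` hook could behave. Plain classes stand in for `HTMLElement` subclasses so the snippet runs anywhere, and the method bodies are assumptions for illustration, not the actual patch described above:

```javascript
// Illustrative only: one possible behavior for a static redefine(newClass) hook.
class MyElement {
  static observedAttributes = ["label"];

  static redefine(newClass) {
    // Adopt the new observed attributes and copy prototype members over,
    // so existing instances pick up the new behavior.
    this.observedAttributes = newClass.observedAttributes;
    for (const name of Object.getOwnPropertyNames(newClass.prototype)) {
      if (name === "constructor") continue;
      Object.defineProperty(
        this.prototype,
        name,
        Object.getOwnPropertyDescriptor(newClass.prototype, name)
      );
    }
  }

  render() { return "old"; }
}

class MyElementNext {
  static observedAttributes = ["label", "color"];
  render() { return "new"; }
}

const el = new MyElement();
MyElement.redefine(MyElementNext);
console.log(el.render()); // "new" — the existing instance sees the update
```

The key property is that instances created before the update observe the new behavior, because the registered class object itself is mutated rather than replaced.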
Thank you for the many responses. Based on the suggestions made here, we built a polyfill. You can try it out by installing the npm package: custom-elements-hmr-polyfill

The essential implementation detail is lifecycle function call delegation (see the repo for the relevant code). Apart from that, we created an interactive example: if you clone the repo and run it, you can see it in action. In this example, fuse-box v4 does the HMR part. But exactly as @rniwa noted, there is a limitation.

I'd suggest a function that explicitly enables re-definition. Doing so would imply no regression and would just enable a new feature safely. Alternatively, one might also think about an event being fired when a re-definition happens.
It would be very strange & problematic to allow re-definition of the same element name. I think the least problematic solution here might be to allow a wild-card for observed attributes. Another somewhat exotic solution is to make use of an author-defined custom element registry, since we can just create a new registry each time you need to load scripts. Both solutions are equally useful outside of this use case, so it's probably worth pursuing either or both.
We explicitly did not allow that.
Right, that was the past discussion, due to some folks from Google objecting to it. However, I'd prefer allowing it.
@rniwa Yes, allowing that would help here.
We've also polyfilled this: https://github.com/vegarringdal/custom-elements-hmr-polyfill

Anyway, it would be nice to see the standard support HMR for Web Components in a way that wouldn't require any polyfills, but in general the issue is pragmatically solved now.
A big shout-out to @kyr0's and @vegarringdal's polyfill effort! This would've been a headache for me just months ago. So I guess there'll be no changes on the W3C front? This inconvenience is better addressed by the JS bundlers via polyfills, not by the standards (unless this register-only-once principle is deemed meritless).
@Leedehai Thank you :)
However, the polyfill is not able to cover all the nifty details. We've managed to solve a lot of issues, but there is still one which we can't address: if a custom element's class definition is mutated at runtime, the (JS) developer expects existing instances to mutate as well, reflecting the change, just like JS object instances change when you change their prototype.
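The expectation described above can be shown with a minimal sketch (names are illustrative): mutating a class's prototype is immediately visible on instances that already exist.

```javascript
// A runtime mutation of a class definition...
class Widget {
  label() { return "before"; }
}

const w = new Widget();
console.log(w.label()); // "before"

// ...is reflected by existing instances, because method lookup walks the
// prototype chain at call time:
Widget.prototype.label = function () { return "after"; };

console.log(w.label()); // "after"
```

This is exactly what a re-registered custom element class does *not* give you today: the registry keeps the callbacks captured at `define()` time, so there is no prototype-chain path from old instances to the new definition.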
So, the standard doesn't define this as of now, and the effect is that we can't completely solve the issue with a polyfill. Even if I looped through a list of attributes defined by the HTML standard, so that I would find custom added properties at the instance level before the clone happens, I could still run into issues when closure functions are involved. I really cannot solve all cases safely, and it has a huge impact on performance as well (all these loops at a per-instance level).

After all these findings, I went back to a page refresh. Later, I switched my framework's implementation to not use web components anymore. I now use an approach to components which is much like React in theory, but more efficient. For style encapsulation I use CSS modules, and I'm working on a bundler that does the magic this involves at compile time. On the other hand, it would be nice if I could have stuck with web components. But there are so many limitations, and I need so many polyfills :/

Well, the "custom element" / component issue has been solved in user-land for years already. If we look, for example, in the direction of React: there is a defined "state", and because of this they know, in the event of a class definition mutation, what state to clone and re-apply and what not. I do the same in SpringType, but allow class properties to be decorated for that purpose.
The contract for define() treating constructor / constructor.prototype as dictionary input is pretty misleading, yeah — but its current behavior is important. If the element's public API were used for live lookup, invariants which are achievable today would cease to be, and production code would break, because today it's possible (and in certain usages, essential) to delete the dictionary-members-disguised-as-methods immediately after definition. However, if a tool wraps classes in an instrumentation layer in advance of registration, it seems like the instrumented wrapper callbacks could delegate to what the tool sees as the 'current' callback definitions, no? If redefinition were permitted, I would hope this would at least be done in a way that doesn't take away any of the invariants which are achievable today, e.g. through an explicit opt-in.
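The wrapper idea suggested above can be sketched in a few lines. A plain class stands in for an `HTMLElement` subclass so the snippet runs outside a browser, and all names are illustrative:

```javascript
// The tool registers a stable wrapper class once; its callbacks delegate to
// whatever the tool currently considers the "live" implementation.
let current = {
  connectedCallback() { return "v1 connected"; },
};

class Wrapper {
  connectedCallback() {
    // Live lookup happens inside the wrapper, not in the registry, so the
    // registry's snapshot-at-define() behavior is never violated.
    return current.connectedCallback.call(this);
  }
}

const el = new Wrapper();
console.log(el.connectedCallback()); // "v1 connected"

// HMR update: swap the delegate; existing instances follow along.
current = {
  connectedCallback() { return "v2 connected"; },
};
console.log(el.connectedCallback()); // "v2 connected"
```

This is essentially what the Proxy-based polyfills discussed in this thread do, just without the Proxy machinery.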
Hi from the webpack team 👋. First of all: hats off to @kyr0 for this awesome PoC. Using Proxies sounds perfectly fine to me. In fact, that's how most HMR implementations work (like https://github.com/gaearon/react-hot-loader). Not allowing the custom element to be redefined makes perfect sense from the browser's point of view. Without having thought about all the small details, I would assume an HMR implementation would call the lifecycle callbacks again. I also think that it's safe to assume that the developer did not do anything weird. Unlike CSS, it's impossible to do safe HMR with arbitrary JavaScript. It's ok to make HMR work in 90% of cases. When in doubt, most developers will hit reload anyway.
I think I like the idea of re-definitions, but I'm not sure what the complexities are. Userland code would be clean, and nothing in a custom element class definition would need to be HMR-specific, because the HMR code would only need to re-define the element, outside of the class definition.
This puts a ton of complexity in the userland code, as @kyr0 describes.
Trying to understand why. Seems like it was just easier to do it that way from an implementation perspective. What are the invariants @bathos?
That's basically what I'm imagining a native redefinition feature would do.
Thinking about it more: the reference to the DOM element that got redefined (downgraded then re-upgraded) would remain the same, and in theory this should keep parent state intact. For example, a parent element may stick children into an element that will be redefined in the future, and may even track the to-be-redefined element in some arrays, Maps, and other places in memory. Those children of the future-redefined element will remain its children, because it is the same reference to the same object in JS after redefinition, and all arrays, maps, etc. will continue to reference the same object. When the said element gets redefined, it can re-handle any children on upgrade. Code using jQuery, holding the same reference to any redefined element, should continue to work the same, for example. There may be cases where re-upgrade causes some property values to reset, but that's ok. Maybe a bubbling event could be dispatched when this happens.
The first set I was referring to concerns storing the reaction functions rather than performing live lookups. If one deletes the callback hooks from the public interface, it can be guaranteed that any invocation of them is legitimate. The latter is that elements which instantiate elements generally have expectations about what those elements will be. Currently those can be relied on: either definition would throw, or if it succeeded you know what you're getting. However, from a backwards-compat perspective, both alternative behaviors could be introduced safely if they were opt-in via the second argument.
The concept of downgrading isn’t sound. State exists in association with objects by identity, whether as keys in weak or strong collections, as private fields, as own properties, as internal slots of platform objects, or just as bindings within closures. This sort of thing works only when adhering to certain idioms that deliberately constrain JS — it doesn’t generalize. |
I do imagine that with a redefinition feature, the engine could very well drop the references to the old hooks and store the new ones, thus maintaining the privacy that you want. (Maybe it'd be better to have some way to expose that.)
That's true. Well, obviously programs are going to outright break if people randomly change the types of the elements that are defined. I don't think people will have success doing that anyway. It'll just be something that everyone knows not to do. It'll be like with a type system (e.g. TypeScript): you can swap out an item for another one that can be assigned to the same base type.
That's a good idea. In production mode the application could simply not pass it in, to lock things in place. If someone wants to abuse it and totally change the types of elements, I think they'll get what's coming to them. It won't be long before they realize they simply shouldn't do that. And having TypeScript in place can trigger type errors once the ambient element type definitions are mapped.
I'm imagining that it would be fine, because on downgrade the referenced object (element) remains the same. After downgrade, the element will still be the same object used as a WeakMap key, the same object at some position in an array, etc. It doesn't matter where the reference is; it'd be the same reference, and all those places would continue to rely on that same reference. It's similar to what we have currently: we can get references to non-upgraded elements and manipulate them before the element's definition is ever registered.
Known truths about the value would be invalidated. The assumption inherent in these assertions is that one would only ever be running code in an environment where all other code is one’s own / trusted. What you’re describing would invert ownership in a way that prevents maintaining a boundary between trusted and untrusted code. Also note that an element’s own properties could have been defined as unconfigurable or the object could have [[IsExtensible]] false, and that attempted reinitialization of private fields throws a TypeError. Therefore ‘reconstructing’ the same object is guaranteed to throw if it has private fields.
There’s a parallel, yeah. But this reversed scenario doesn’t create the same problems. Actions which would prevent the constructor from succeeding would, well, prevent it from succeeding. This is different from actions which can (silently, in part) invalidate the internal state of existing elements remotely, from any source.
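The earlier point about private fields can be demonstrated in today's JS: initializing a class's private field on the same object twice throws a TypeError. The return-override trick below is just a way to show this against a plain object; all names are illustrative.

```javascript
// A base class that returns its argument lets the derived class's private
// field initializer run against an arbitrary pre-existing object.
class Stamper extends (class { constructor(obj) { return obj; } }) {
  #tag = "upgraded";
  static read(obj) { return obj.#tag; }
}

const el = {};                 // stands in for an existing DOM element
new Stamper(el);               // first "upgrade": installs #tag on el
console.log(Stamper.read(el)); // "upgraded"

// A second "upgrade" of the same object throws, because a private field
// cannot be re-initialized on an object that already has it.
let threw = false;
try {
  new Stamper(el);
} catch (err) {
  threw = err instanceof TypeError;
}
console.log(threw); // true
```

This is why "reconstructing" an existing element in place is guaranteed to throw whenever the class uses private fields.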
Aaargh, Custom Elements are too complicated. Here's an idea: we should introduce a new type of modern Document that ShadowRoots can opt into using, and this new Document would drop all the bad parts of traditional DOM and keep only the good parts, and work with Proxies, etc. |
F2F resolution: This is a duplicate of #820 |
Hi,
I'm working on SpringType, a TypeScript-based pure web component framework.
An important part of the developer experience is HMR (hot module replacement), so that code changes in web components are detected, re-transpiled and injected into the current developer's browser context on the fly.
It's all shiny and just some LoC away, until it comes to re-defining the web component constructor in the browser context. By the means of the current spec and implementations it seems to be impossible to do so, because repeated calls to
window.customElements.define()
with the same element name but a different constructor function aren't allowed. That is kind of sad, because it would be amazing to be able to patch a whole DOM tree on the fly. It seems like I'm not the only one having this issue. In fact, even the webpack guys are using a hack to get around this:
https://github.com/aruntk/wc-loader/blob/master/src/index.js#L19
They seem to just use a different element name each time they re-register a web component when HMR comes into play. But this comes with nasty side effects: CSS selectors would return "wrong" / different results, as they wouldn't match as probably expected...
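A hedged sketch of that name-suffix workaround (not wc-loader's actual code; a Map stands in for `customElements` so the snippet runs anywhere):

```javascript
// Each hot update registers the component under a fresh tag name, since
// define() can never be called twice for the same name.
const registry = new Map();    // stand-in for window.customElements
const generations = new Map(); // how many times each base name was defined

function defineFresh(baseName, ctor) {
  const gen = generations.get(baseName) ?? 0;
  generations.set(baseName, gen + 1);
  const name = gen === 0 ? baseName : `${baseName}-v${gen}`;
  registry.set(name, ctor);    // customElements.define(name, ctor) in a browser
  return name;
}

class MyElementV1 {}
class MyElementV2 {}
console.log(defineFresh("my-el", MyElementV1)); // "my-el"
console.log(defineFresh("my-el", MyElementV2)); // "my-el-v1"

// Downside: any stylesheet rule or querySelector("my-el") written against
// the original tag name no longer matches the newly created elements.
```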
Basically, I could just do a whole page reload each time I detect a code change in a web component, but that is an argument against web components in general... being forced into a full page reload in times of HMR and virtual DOM algorithms isn't very cool :)
I just wanted to ask: what is the status of the debate on this?
Thank you! :)