
Regardless, it has been approved as part of JavaScript specification and Chrome folks refuse for political reasons to implement it, which given the market domination everyone has helped them achieve is a bummer.


Firefox has also chosen not to implement it, right? Indeed, my impression was that Firefox was far more against the whole thing than Chrome, which did implement TCO but then removed it for various reasons.

My impression is that this is less an issue with any particular implementor, and more with the feature not being fully thought through at the beginning. A lot of the browsers (including the Edge team at the time) ended up running into issues, on top of the UX/explanatory problems. That's why there was the alternative proposal for explicit tail calls, but those didn't go anywhere.


Firefox no longer matters, hence why I left it out.


1. Complain about market dominance.

2. Any 2% of the market doesn't matter.

3. …


Yes, those 2% only came to be because of folks pushing Chrome and Electron all over the place.

Now it is too late to save Firefox.


Right.

And apparently, the best way to reverse that situation is to ignore Mozilla (or any other player with single digit market share) at the W3C?

And do what instead? Side with Apple/Microsoft as long it hurts Google?


They said it was too hard, but now they’re adding it because of wasm, so their objection doesn’t matter.


I left the spidermonkey team a while back but was there when this issue came up and I was strongly against implementing support for it. For me it wasn’t about it being “too hard” (implementation difficulty wouldn’t have been that bad actually).

It was more that it forced the implementation to elide stack frames whenever calls occurred in tail position.

That’s a semantic change to program behaviour. And one that messed with the expectations of developers.

I would have been fine with some explicit syntax for invoking that behaviour, but it seems the standards process that got it approved never really consulted developers or implementors.

Do you know why the standard committee rejected adding explicit syntax for tail returns and instead went the route of declaring that all calls in tail position must lead to stack elision? I never really bothered to find out.

Also is spidermonkey now adding TCO to JS because of wasm, or are they adding TCO to wasm?


Most likely because FP languages that have it as part of the language specification don't need special syntax for it; it just works, and their debuggers are able to provide a good debugging experience.

The only FP languages that require explicit syntax target the JVM, which lacks the support.


It wouldn't just work for javascript and the way it's used, though. It modifies semantics in a way that would affect the behaviour of existing programs in production environments. It would break things that currently work, and it would break tooling that currently works.

Common tooling for production error tracking (e.g. Sentry) would be greatly affected by such a behavioural change.

Functional languages "just work" with those semantics because those expectations were set up ahead of time, and tooling and developer expectations were molded around them. But JS isn't one of those languages. The tooling, the developer expectations, and gargantuan amounts of existing code have all been written with the expectation that calls create stack frames.
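To make the concern concrete, here's a small sketch of the frames error-tracking tools currently see and which proper tail calls would elide. It assumes a non-PTC engine with V8-style `Error.stack` formatting (e.g. Node); in Safari's JavaScriptCore the elided frames really would be missing.

```javascript
// Without PTC (e.g. Node/V8), every caller in the chain leaves a frame
// that error-reporting tools can read from Error.stack. The "at <name>"
// trace format assumed here is V8-specific.
function inner() {
  return new Error("boom").stack;
}
function middle() {
  return inner(); // tail call: PTC would elide middle's frame here
}
function outer() {
  return middle(); // tail call: PTC would elide outer's frame too
}

const stack = outer();
// Today this trace names inner, middle, and outer; under proper tail
// calls, middle and outer would simply be gone from it.
console.log(stack.includes("middle")); // true (in a non-PTC engine)
console.log(stack.includes("outer"));  // true (in a non-PTC engine)
```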

> The only FP languages that require explicit syntax, target the JVM, which lacks the support.

Support could be added to the JVM just the same as the JSVMs, via a spec change. And just as with the JSVMs, it would be a semantic change that would alter the behaviour of existing programs in ways that would break existing tooling and make developers' lives worse. It's why both of them resist it.


It might give different tooling output, but the tooling wouldn't break. Likewise, the only side effect on the code itself is that some of it would get faster, because tail calls wouldn't push new stack frames.

The number of sites affected by automated stack-trace tools is far smaller than the number of sites that would benefit from tail calls.

More importantly, this breakage already happened 8 YEARS ago in Safari, and 20% of all web users are already on a browser that breaks those expectations, so companies should have already adjusted. At this point, holding out only serves to split the web.


SpiderMonkey is still used in over 2% of the browser market share, and is used in non-Firefox applications too. I'd say it matters.


Sadly it doesn't: in many places, market share that low means the project's browser-support matrix no longer needs to contemplate Firefox.


Usually that's because people make shit websites or products, needlessly ignoring browser standards that would work in basically all modern browsers, because they cannot be bothered or don't have a clue what they are doing.


I find it surprising that the JS specification also includes optimizations an engine should implement. I'm not sure how difficult TCO is to implement, but I just checked whether QuickJS supports it, and it seems to be one of the few ES2020 omissions the author chose to make: https://bellard.org/quickjs/quickjs.pdf (3.1.1).

This tells me it might be non-trivial to implement, which if true makes the decision to make it part of the specification all the more surprising, because it means the specification restricts the types of trade-offs implementers can make when attempting to achieve full compatibility. In other words, until today I was under the impression you could make a fully ES-compatible engine but choose to make it slow for implementation ease. Looks like the spec defines the floor for how (non-)performant the engine implementation can be in some cases. Is this common in other languages' specs?

Edit: Oh, seeing the sibling comment, are TCO and "tail calls" different things? If so, I remain unclear on the status of TCO support in QuickJS.


There’s a bit of a naming confusion around tail calls, but in any case “proper tail calls”, let’s call them that, are not precisely an optimization: they are a guarantee that any number of recursive calls of a certain kind will result in constant and not linear memory consumption. This permits some kinds of programming that can otherwise be quite awkward. If your program takes advantage of them (as e.g. often happens in Scheme), having tail calls is not an optimization issue, it’s an implementation correctness issue.
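The correctness-versus-optimization distinction is easy to demonstrate in an engine without proper tail calls (this sketch assumes Node/V8; in an engine with PTC, such as Safari's JavaScriptCore, the recursive version would succeed too):

```javascript
// Tail-recursive sum: every recursive call is in tail position, so a
// PTC engine runs it in constant stack space. Without PTC, stack usage
// is linear in n and large inputs overflow.
function sumRec(n, acc = 0) {
  return n === 0 ? acc : sumRec(n - 1, acc + n);
}

// The loop rewrite is what developers are forced into without PTC.
function sumLoop(n) {
  let acc = 0;
  while (n > 0) acc += n--;
  return acc;
}

console.log(sumLoop(1_000_000)); // 500000500000
try {
  sumRec(1_000_000); // overflows in a non-PTC engine like V8
} catch (e) {
  console.log(e instanceof RangeError); // true: "Maximum call stack size exceeded"
}
```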

Now, tail calls are quite annoying to implement in C on top of a compiler that doesn't support them (and in fact I don't think any mainstream platform has a C ABI that would allow a C compiler to natively support them between functions of arbitrary types). That has always been a (solvable) issue for Scheme-to-C translators, and it was probably a consideration for QuickJS. Chrome's V8, though, is so far away from interpreting things in C or even translating them to C that I expect any difficulties the developers have are of a completely different nature.
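For what it's worth, the classic workaround when the host language lacks tail calls is a trampoline, sketched here in JavaScript rather than C. The `trampoline` helper and thunk convention below are illustrative, not any particular library's API:

```javascript
// Trampoline: instead of making the tail call directly, return a
// zero-argument function (thunk) describing it; the driver loop below
// unwinds thunks iteratively, so the native stack never grows.
function trampoline(fn, ...args) {
  let result = fn(...args);
  while (typeof result === "function") {
    result = result(); // run the next bounce
  }
  return result;
}

// Written in "trampolined style": the tail call is wrapped in a thunk.
function countDown(n) {
  return n === 0 ? "done" : () => countDown(n - 1);
}

console.log(trampoline(countDown, 1_000_000)); // "done", no stack overflow
```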

(As an example, LuaJIT does support tail calls but kind of sucks at inferring hot loops written using them, so has really impressively draconian limits on how many you can have before the JIT aborts. They otherwise work, though, as is required for a valid Lua implementation.)


TCO isn't really an optimisation per se, at least in the sense that optimisations typically aren't observable as part of the semantics of a language. TCO does affect the semantics; it says that a recursive function that recurses only using tail calls will never overflow the call stack. That's why it's something that might get written into the specification rather than just left as an implementation detail: it's the sort of implementation detail that developers might actually rely on.

While having tail calls probably improves performance in practice, the spec isn't making a point about performance here, only semantics. It would, for example, be permissible to have a spec-compliant JS implementation that handled tail calls very slowly, as long as they correctly avoid growing the stack.

As to why various implementations don't do TCO, my impression is that it's a combination of complexity and usability issues that have kept it from being implemented. And if some of the major browsers won't implement it, then I can see why other, smaller implementations don't see it as a priority.


There's another consideration: you can introspect the stack using Error.stack (also Function.caller and friends?). It requires a spec change to say "it's okay to not include these stack frames in specific cases". This information would also presumably not show up in the devtools/debugger, or in Sentry, etc.

I personally lean towards not implementing it implicitly (although I think there was some discussion about having a special syntax), just because of the potential for confusion during debugging. E.g. if all tail calls are elided, then you could have stack frames that don't "make sense" (how did function A call function C? Well, through function B, but that frame is gone, and this could be really confusing in deep stacks). Alternatively, you could use a heuristic, but that just raises even more questions.


This is solvable by activating a shadow stack. You can also just note on the trace, pretty cheaply, that function X was reached via a tail call. That eliminates any ambiguity about what happened.

It's no different from async functions not having a full stack trace, and JS devs survived for years without async traces just fine.
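As a userland illustration of the "note it on the trace" idea (all names below are made up; a real shadow stack would live inside the engine and devtools, not in script):

```javascript
// A toy shadow trace: instead of keeping a full frame per elided tail
// call, record one cheap marker per hop so the trace still explains how
// control got where it is.
const shadow = [];

function tailCall(label, fn, ...args) {
  // A real engine would do this internally during frame elision.
  shadow.push(`tail-called ${label}`);
  return fn(...args);
}

function a() { shadow.push("called a"); return tailCall("b", b); }
function b() { return tailCall("c", c); }
function c() { return shadow.join(", "); }

console.log(a()); // "called a, tail-called b, tail-called c"
```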


Scheme (what JavaScript was based on) also requires tail call optimizations as part of its spec.


> I find it surprising that the JS specification also includes optimizations an engine should implement.

Algorithmic complexity is an essential part of defining an interface.


What "political" reasons?

My understanding of the situation is that it's entirely technical issues that came up after implementing.


Yes, what's really important here is that TCO was added to the spec before the modern TC39 process was adopted. Had implementation and usage experience been required before it shipped in the spec, this conversation would have been concluded 9 years ago.


I think the most cited reason they give is diminished developer experience. That is, when developers write a recursive function without intending it to be tail-call optimized, they lose their call stack. In theory this makes debugging harder. However, I don't buy this reasoning, and neither do others interested in JavaScript design. There is little, if any, data showing this happens to developers in real life.

I also believe that Google's representatives at TC39 (and Mozilla's to a lesser extent) know this. Meaning they have other reasons for not implementing TCO. My personal theory is that they are very much against expanding the functional paradigm in JavaScript. TC39 has been very hostile towards any proposals which would expand the functional paradigm in JavaScript. The decision to advance the Hack-style pipeline operator (which uses a placeholder) over the F#-style (functional/tacit) operator is a prime example of this.
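For readers unfamiliar with the two pipeline proposals: neither operator has shipped, so the contrast can only be emulated with an ordinary helper. The `pipe` function below is made up for illustration, not a real API:

```javascript
// Emulating both pipeline styles with a hypothetical helper. F#-style
// pipes bare unary functions (tacit/point-free); Hack-style pipes
// arbitrary expressions via a placeholder, stood in for here by an
// explicit arrow parameter.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const double = (x) => x * 2;
const inc = (x) => x + 1;

// F#-style flavour: just name the functions.
const fsharpResult = pipe(5, double, inc); // 11

// Hack-style flavour: each step is an expression with a placeholder
// (the arrow parameter plays the role of Hack's `%` token).
const hackResult = pipe(5, (x) => x * 2, (x) => x + 1); // 11

console.log(fsharpResult === hackResult); // true
```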

This, I believe, is purely political.


Having worked on debuggers in JS land I can tell you that it would generate lots of bug reports.

Does the developer expect a call stack?

Does the developer NOT expect a call stack? Maybe they want to see if the optimization was activated.

Bug reports either way.

That said, all sorts of weird changes happen when a debugger is enabled.

The jit will eliminate various pieces of code and remove variables from scopes, but you better put them back if the debugger stops! This one I remember in Firebug. Oops!

¯\_(ツ)_/¯


Safari has enabled proper tail calls for 8-9 years now and they haven’t caused issues despite Safari having 20% of the worldwide browser market.


I guess Safari doesn’t have this large a share of developers, who are the main concern here. But I guess we will truly see if this is an issue as Bun increases in popularity.


The major Google "objection" was automated trace loggers. When something goes wrong, they grab the stack trace and ship it off to the developers.

In this particular case, devs definitely see the Safari traces when bugs pop up.


I feel like it could be represented in the trace or the debugger or both, which would eliminate everyone's issues.

Devs definitely don’t like surprises, but also absolutely hate their tools not telling the truth.


Sadly, I sense something amiss regarding the functional paradigm as well.

Paul Graham wrote back in 2001 that Lisp was a secret weapon for building a startup due to the developer productivity gains. https://paulgraham.com/avg.html

As a corollary, the more lisp-like JavaScript is, the more it reduces barriers to entry for launching a tech company, promoting more software engineers into potential new competitors.

Could it be that Java and Python are taught in universities because some members of the industry prefer making it more cumbersome for new grads to launch a company? Having significant barriers to entry is business strategy 101.


Are you saying that knowing LISP makes you like an instant Paul Graham? If only.


Ew, I hope not. I know lisp, I do not want to be Paul Graham.


Nice try


There were no technical arguments from Google. They implemented tail calls and performance was fine. Furthermore, wasm required them to add the functionality back to the jit.

MS complained that windows APIs made it slow, but first, they should fix their OS. Second, chrome didn’t have those same issues on windows. Third, they now use chrome, so it’s no longer an objection.

Firefox complained that it was work, but they’re also doing it for wasm.

This only leaves the stack trace argument, but the biggest cases for tail calls are where you’d already be forced to a loop and nobody complains that loops don’t have stack traces.
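A minimal sketch of that trade-off, assuming an engine without proper tail calls like today's V8: mutual recursion is the textbook case where the "just use a loop" rewrite forces you to fuse separate functions into one hand-written state machine.

```javascript
// Mutually recursive state machine: with proper tail calls this runs in
// constant stack space; without them, large n overflows, and the loop
// rewrite below has to merge both functions into a single flag flip.
function isEven(n) {
  return n === 0 ? true : isOdd(n - 1);   // tail call
}
function isOdd(n) {
  return n === 0 ? false : isEven(n - 1); // tail call
}

console.log(isEven(10));  // true (shallow depth works in any engine)
console.log(isOdd(101));  // true

// The manual "loop" version fuses the two states by hand:
function isEvenLoop(n) {
  let even = true;
  while (n-- > 0) even = !even;
  return even;
}
console.log(isEvenLoop(1_000_000)); // true, constant stack
```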

Further, you already have the same issues with async stack traces and activating shadow stacks in debug mode works just as well as those async traces.

It's all about Chrome devs digging in their heels because they took a side in a dumb argument and won't just swallow their pride and implement what developers want and the spec demands.


Anything to do with tail calls in WASM is likely to be completely irrelevant to proper tail calls in ECMAScript: they’ll be separate pieces of code with different reasons and balances.

Moreover, tail call elimination is explicit in WASM. The initially-defined call and call_indirect instructions aren't allowed to do tail call elimination, and the tail-call proposal added return_call and return_call_indirect, which require it. The situation is thus more like the (now inactive) TC39 Syntactic Tail Calls proposal. No need to worry about the debugging or performance implications.

> Firefox complained that it was work, but they’re also doing it for wasm.

They said it was impossible to implement across realms with their membrane-based security model. That is: “we could do it in most cases, but you’ve said we have to do it in all cases and we can’t do that without replacing a fundamental and pervasive part of our engine, which we certainly don’t want to do for something we and others aren’t convinced is even a good idea”. But in WASM, any tail calls are to the same WASM program, where this isn’t a problem.


WASM runs in v8 on Chrome. If you have to implement the bytecode for one, it should work for both. Chrome used to have semantic tail calls for JS implemented too.

FF runs WASM on Spidermonkey. If the security model is broken by JS, there's not a great reason to believe it won't also be broken for the same JIT when running WASM.


> Firefox complained that it was work

LOL, as they do.



