Performance tests aren't unusual. But sometimes things get slower out of necessity, and it's impossible for a test to automatically distinguish between intentional and unintentional slowdowns. At some point someone has to make a judgment call about updating the test to accept the new state of things. Or you draw a hard line and say things are never allowed to get slower no matter what, but that can be a tough goal to maintain.
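One way to make that judgment call visible is an explicit performance budget checked in alongside the code. This is a minimal sketch, not anyone's actual test suite; the workload, budget constant, and threshold value are all made up for illustration. The point is that accepting an intentional slowdown means a human bumps the constant in review, rather than the test silently drifting:

```python
import time

# Hypothetical budget for this made-up workload. When an intentional
# slowdown lands, a reviewer raises this number deliberately -- that
# diff IS the judgment call, recorded in version control.
BUDGET_SECONDS = 0.5

def workload():
    # Stand-in for the real code under test.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def test_workload_within_budget():
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    assert elapsed < BUDGET_SECONDS, (
        f"workload took {elapsed:.3f}s, budget is {BUDGET_SECONDS}s; "
        "if the slowdown is intentional, raise BUDGET_SECONDS in review"
    )
```

Wall-clock thresholds like this are noisy on shared CI runners, which is part of why the "never slower, no matter what" line is hard to hold in practice.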
But that's not what is in the whole context. The whole context contains a lot of noise and false "thoughts".
What the AI needs to do is to document the software project in an efficient manner without duplication. That's not what this tool is doing.
I question the value of storing all that crap in git.
I was so confused. Why is domain-driven design especially good for debugging? I guess context is bound within the models... And then all the other comments were just talking about debugging tools. Glad I was not the only one.
Browser monoculture is bad for the open web and if all we have is Webkit (Safari on iOS, Macs) and its fork Blink for all the Chromium browsers, then the web will start becoming a mess of proprietary extensions instead of open standards.
I see this claim often. As someone who learned web dev during the days of IE dominance, I don't understand it.
Internet Explorer never kept up, especially after IE6 reigned supreme. It wasn't "a little behind," nor was it merely missing some niche APIs or implementing them in a buggy or proprietary way. It actively ignored standards, it didn't receive real updates for a long time (IE11 being the best they could offer), and with few exceptions (namely, the invention of CSS Grid and XMLHttpRequest) it degraded the ecosystem for over a decade. It actively held companies back from adopting new web standards. It's why polyfilling became as widespread as it is now.
Safari / WebKit has caused none of this. Yes, Safari sometimes lags behind in ways that are frustrating. Yes, Apple sometimes refuses to implement an entire API for political rather than technical reasons (see the FileSystem API), but by and large it has kept up with standards in a reasonable time frame.
While their missing or partially implemented APIs can feel really frustrating, they haven't actively held back anyone's work or the mass adoption of newer browser APIs.
Apple has its faults, but this isn't even close to the drudgery of the IE heyday.
I've not used Claude yet, but why would it be bad if it gains features that people use?
Did people ever complain about Photoshop having too many features and demanding too much cognitive load? Excel? Practically every IDE out there?
There is a reason people use those tools instead of a plain text editor or Paint. They're for power users, and people will become power users of AI as well. Some will stick with ChatGPT forever and some will use an ever-increasing ecosystem of tools.
good question. the difference with AI tools is the interface isn't stable in the same way photoshop or excel is. with traditional software you learn it once and muscle memory carries you. with LLM tools the model itself changes, the optimal prompting style shifts, features interact with model behavior in unpredictable ways. so the cognitive load compounds differently. not saying features are bad, just that the tradeoffs are different
I don't know; I tend to come across new tools written in Rust, JavaScript, or Python, but relatively few in C. The number of times I see "cargo install xyz" in the git repo of some new tool is definitely noticeable.