> constant churn of refactoring, reorganizing and spiking to fix self-inflicted defects

Sometimes there are valid reasons for addressing technical debt and reworking things to be better in the future… and other times people are just rewriting working code because reasons™.

Encountering the latter can be quite demotivating, especially when it turns into nitpicking over small stuff or keeping releases back for no good reason. Personally, I try to lean away from that and more into the "if it works, it works" camp (as long as you don't ignore referential integrity and don't mess up foundational logic).


> This has never happened and never will. You simply are not omniscient. Even if you're smart enough to figure everything out the requirements will change underneath you.

My best project to date was a largely waterfall one - there were somewhere around 50-60 pages of A4 specs, a lot of which I helped the clients engineer. As with all plans, a lot of it changed during implementation: I actually figured out a way of implementing the same functionality but automating it to a degree where about 15 of those pages could be cut out.

Furthermore, it was immensely useful because by the time I actually started writing code, most of the questions that needed answers and would alter how it should be developed had already come up and could be resolved, in addition to me already knowing about some edge cases (at least when it came to how the domain translates into technology) and how the overall thing should work and look.

Contrast that to some cases where you're just asked to join a project and help out, and you jump into the middle of ongoing development, not knowing that much about any given system or the various things that the team has been focusing on in the past few weeks or months.

> It’s not hard to see that if they had a few really big systems, then a great number of their problems would disappear. The inconsistencies between data, security, operations, quality, and access were huge across all of those disconnected projects. Some systems were up-to-date, some were ancient. Some worked well, some were barely functional. With way fewer systems, a lot of these self-inflicted problems would just go away.

Also this reminds me of https://calpaterson.com/bank-python.html

In particular, this bit:

> Barbara has multiple "rings", or namespaces, but the default ring is more or less a single, global, object database for the entire bank. From the default ring you can pull out trade data, instrument data (as above), market data and so on. A huge fraction, the majority, of data used day-to-day comes out of Barbara.

> Applications also commonly store their internal state in Barbara - writing dataclasses straight in and out with only very simple locking and transactions (if any). There is no filesystem available to Minerva scripts and the little bits of data that scripts pick up has to be put into Barbara.

I know that we might normally think that fewer systems might mean something along the lines of fewer microservices and more monoliths, but it was so very interesting to read about a case of it being taken to the max - "Oh yeah, this system is our distributed database, file storage, source code manager, CI/CD environment, as well as web server. Oh, and there's also a proprietary IDE."

But no matter the project or system, I think being able to fit all of it in your head (at least on a conceptual level) is immensely helpful, the same way that having a more complete plan ahead of time can be more helpful than a wide variety of assumptions and "we'll decide in the next sprint".


I used not to capitalize "I" in my own writing, because it seemed a bit silly to do that, even though making it more distinct visually seems okay now, some years later.

At the same time, in my language (Latvian) you/yours should also get capitalized in polite text correspondence, like formal letters and such. Odd.


I disagree with your disagreement - for example, HN is readable, but the linked site feels too small for my eyes on a 21.5" 1080p monitor. It also doesn't respect browser preferences, unless you enforce a minimum font size (which can break display elements on other sites):

  font-family: Calibri, Candara, Segoe UI, Optima, Arial, sans-serif;
  font-size: 13px;
If the dev wanted a similar effect by default while being more accommodating, they could do:

  font-family: Calibri, Candara, Segoe UI, Optima, Arial, sans-serif;
  font-size: 0.8125rem;
There's no reason why you couldn't have a smaller font while still respecting browser scaling. However, they might also want to just leave it at 1rem and let the folks who prefer higher information density customize their own browser settings, since those are what most well-developed sites should respect and it might be more accessible by default on most devices (for my eyes, at the very least).

As for targeting specific screen sizes for non-standard font scaling, media queries would also help!
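
For example, a minimal sketch along these lines (the breakpoint and sizes are purely illustrative, not taken from the site in question):

  html {
    font-size: 1rem; /* respect the browser/user default everywhere */
  }

  /* only go denser where there's plenty of room, rather than shrinking text for everyone */
  @media (min-width: 1600px) {
    html {
      font-size: 0.8125rem;
    }
  }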

In regard to missing information-dense pages, try changing your browser font settings - it might actually be quite pleasant for you to see many sites respecting that preference!


> I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.

Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

But obviously that’s not Serious Business™ and won’t give you zero downtime and high availability.

Though tbh most mid-size companies would also be okay with Docker Swarm or Nomad and the same software clustered and running behind HAProxy.

But that wouldn’t pad your CV so yeah.


> Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC. How did we get to the point where applications by default came with all of this shit?


When I was in primary school, the librarian used a computer this way, and it worked fine. However, she had to back it up daily or weekly onto a stack of floppy disks, and if she wanted to serve the students from the other computer on the other side of the room, she had to restore the backup on there, and remember which computer had the latest data, and only use that one. When doing a stock–take (scanning every book on the shelves to identify lost books), she had to bring that specific computer around the room in a cart. Such inconveniences are not insurmountable, but they're nice to get rid of. You don't need to back up a cloud service and it's available everywhere, even on smaller devices like your phone.

There's an intermediate level of convenience. The school did have an IT staff (of one person) and a server and a network. It would be possible to run the library database locally in the school but remotely from the library terminals. It would then require the knowledge of the IT person to administer, but for the librarian it would be just as convenient as a cloud solution.


I think the 'more than one user' alternative to a 'single EXE on a single computer' isn't the multilayered pie of things that KronisLV mentioned, but a PHP script[0] on an apache server[0] you access via a web browser. You don't even need a dedicated DB server as SQLite will do perfectly fine.

[0] or similarly easy to get running equivalent


> but a PHP script[0] on an apache server[0] you access via a web browser

I've seen plenty of those as well - nobody knows exactly how things are set up, sometimes dependencies are quite outdated and people are afraid to touch the cPanel config (or however it's set up). Not that you can't do good engineering with enough discipline, it's just that Docker (or most methods of containerization) limits the blast radius when things inevitably go wrong and at least tries to give you some reproducibility.

At the same time, I think that PHP can be delightfully simple and I do use Apache2 myself (mod_php was actually okay, but PHP-FPM also isn't insanely hard to set up), it's just that most of my software lives in little Docker containers with a common base and a set of common tools, so they're decoupled from the updates and config of the underlying OS. I've moved the containers (well, data + images) across servers with no issues when needed and also reinstalled OSes and spun everything right back up.

Kubernetes is where dragons be, though.


> That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC

I doubt that.

As software has grown from solving simple personal computing problems (write a document, create a spreadsheet) to solving organizational problems (sharing and communication within and without the organization), it has necessarily spread beyond the .exe file and local storage.

That doesn't give a pass to overly complex applications doing a simple thing - that's a real issue - but to think most modern company problems could be solved with just a local executable program seems off.


It can be like that, but then IT and users complain about having to update this .exe on each computer when you add new functionality or fix some errors. When you solve all major pain points with a simple app, "updating the app" becomes the top pain point, almost by definition.

> How did we get to the point where applications by default came with all of this shit?

Because when you give your clients instructions on how to set up the environment, they will ignore some of them and then install OracleJDK while you have tested everything under OpenJDK, and you have no idea why the application is performing so much worse in their environment: https://blog.kronis.dev/blog/oracle-jdk-and-openjdk-compatib...

It's not always trivial to package your entire runtime environment unless you wanna push VM images (which is in many ways worse than Docker), so Docker is like the sweet spot for the real world that we live in - a bit more foolproof, the configuration can be ONE docker-compose.yml file, and it lets you manage resource limits without having to think about cgroups, as well as storage, exposed ports, custom hosts records and all the other stuff that the human factor in the process inevitably fucks up.
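
As a rough sketch of what that single file can look like (the image name, paths and limits below are made up for illustration, not from any real deployment):

  services:
    app:
      image: registry.example.com/my-app:1.2.3
      ports:
        - "8080:8080"                    # exposed ports
      volumes:
        - app-data:/var/lib/app          # storage
      extra_hosts:
        - "legacy-db.internal:10.0.0.5"  # custom hosts records
      mem_limit: 512m                    # resource limits, no manual cgroups fiddling
      cpus: 0.5

  volumes:
    app-data:

After that it's docker compose up -d on whatever box it needs to run on, and the runtime environment travels with the image.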

And in my experience, shipping a self-contained image that someone can just run with docker compose up is infinitely easier than trying to get a bunch of Ansible playbooks in place.

If your app can be packaged as an AppImage or Flatpak, or even a fully self-contained .deb, then great... unless someone also wants to run it on Windows or vice versa, or any other environment that you didn't anticipate, or it has more dependencies than would be "normal" to include in a single bundle, in which case Docker still works at least somewhat.

Software packaging and dependency management sucks, unless we all want to move over to statically compiled executables (which I'm all for). Desktop GUI software is another can of worms entirely, too.


> catching when things go sideways

Curiously, this is where automated checks - which people have known are useful for years but haven't been implementing widely enough - come in really handy!

Not just linters and code tests, but also various checks regarding the architecture - like how the code is organized, how certain abstractions are used (e.g. if you want to enforce Pinia Setup stores instead of Option stores, and the Vue Composition API instead of the Options API; or a particular ASP.NET or Spring Boot way of structuring filters, API endpoints and access controls) and so on.
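
For example, on the Spring Boot side that kind of structural rule can be written as an executable test with something like ArchUnit - a rough sketch, with the package names made up for illustration:

  // sketch of an ArchUnit check; "..web.." and "..repository.." are placeholder package patterns
  import com.tngtech.archunit.core.domain.JavaClasses;
  import com.tngtech.archunit.core.importer.ClassFileImporter;
  import com.tngtech.archunit.lang.ArchRule;
  import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

  public class ArchitectureCheck {
      public static void main(String[] args) {
          JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

          // the web layer must go through services instead of reaching straight into repositories
          ArchRule rule = noClasses()
                  .that().resideInAPackage("..web..")
                  .should().dependOnClassesThat().resideInAPackage("..repository..");

          rule.check(classes); // throws on violation, which fails the CI job
      }
  }

The frontend equivalent would be lint rules that outright reject Options API components or Option stores, rather than leaving it to review comments.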

Previously we just expected a bunch of devs to do a lot of heavy lifting along the lines of: "Oh yeah, this doesn't match our ADR described there, please follow the existing structure" which obviously doesn't work when the LLM produces code at 10x the rate.

I think the projects that will truly work well with increased agentic LLM use will be those that have hundreds of various checks and actually ENFORCE standards instead of just expecting them to be followed (which people don't do anyways).


> Many European companies would stop to a halt as they can't access any documents they have "on the cloud" or maybe can't even access their own phone or computer.

I hate that "Nobody got fired for choosing IBM" is a thing and that the people suggesting that we have good enough FOSS options when things were being planned out were probably given a dismissive look by the business people who were promised the sky by MS salesmen.

At least that's how I imagine it probably looked, given my own past experience of suggesting PostgreSQL and the project going with Oracle in the end (it's okay when it works, but for those particular projects PostgreSQL would have worked better, given the issues I've seen in the following years). It's the same non-utilitarian / cargo-cult thinking that leads to solutions like SQLite not being picked when it would actually suit the workload better than a "serious" RDBMS with a network in the middle.

Apply the same to server OSes (Windows vs Linux distros, and even DEB-based distros vs RPM RHEL-compatibles), MS Office vs LibreOffice when you don't even need advanced features, and stuff like Slack/Teams vs self-hosting Mattermost or Zulip or whatever. It's not even about jumping on untested software, but on fairly boring and okay packages (with known limitations that are objectively often NOT dealbreakers) and not making yourself vendor-locked (hostage).

I guess I could also make the more realpolitik take - use MS, use Oracle, use whatever is the path of least resistance, BUT ONLY if you're not making yourself 100% reliant on it. If Microsoft or Google decides they hate you tomorrow, you should still have a business continuity plan. If systems have standby nodes, why not have a basic alternative standby system, or the ability to stand up a Nextcloud instance when needed, for example (or the knowledge and training on how to do that)? If people had govt. services before computers were widespread and you can have people processing a bunch of paper forms, then surely, if push comes to shove, it'd be possible to stand up a basic replacement for whatever gets borked while ignoring all of the accidental complexity (even if it'd mean e-mailing PDFs for a while). If someone builds their national tax system or ID system on a foreign cloud, though, then they are absolutely fucked.


I don't think it's easy to replace ENTRA feature-wise with a European provider.

Or github if you're using a bit more than self-hosted gitlab can provide.

It's not always about the location; it's usually about features (how it integrates with other hardware/software), rarely about price.

For example, can you suggest firewalls for offices that aren't either American or Israeli? We'd need something to replace Palo Alto, Bluecoat, Fortigate and Juniper. Also it'd be good to replace Cisco VPNs to be honest.

But it kind of must be feature parity, because (European) regulators hold our balls over hot coals.


Sophos

By gods, no....

But I take your answer as provided in good faith.


For a company that (optionally) wants to self-host stuff, I'd say GitLab is pretty great - it's there for you, be it in the cloud or on-prem, and it mostly works if you have enough resources to throw at the instance.

It's not as demanding as some of the other software out there, like a self-hosted Sentry install - just look at all of the services: https://github.com/getsentry/self-hosted/blob/master/docker-... in comparison to their self-contained single-image install: https://docs.gitlab.com/install/docker/installation/#install...

At the same time it won't always have full-on feature parity with some of the other options out there, or won't be as in-depth as specialized software (e.g. Jira / Confluence), BUT everything being integrated can also be delightfully simple and usable.

I will say that I immensely enjoy working with GitLab CI at work (https://docs.gitlab.com/ci/) - even the colleagues on projects using Jenkins migrated over to it, and it seems like everyone prefers it as well, the last poll showing 0 teams wanting to use Jenkins over it (well, I might use Jenkins later for personal stuff, but that's more tool-hopping - like I also browser and distro hop - to see how things have changed).
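
For anyone who hasn't used it, the entire pipeline is just a .gitlab-ci.yml file in the repo - a minimal sketch (the image and commands are placeholders, not from any particular project):

  stages:
    - test
    - build

  test:
    stage: test
    image: node:22        # whatever runtime your project needs
    script:
      - npm ci
      - npm test

  build:
    stage: build
    image: node:22
    script:
      - npm run build
    artifacts:
      paths:
        - dist/

Runners pick the jobs up and the results (artifacts included) show up right next to the merge request.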

However, it was a bit annoying for me to keep up with the updates and the resource usage on a VPS, so my current setup is Gitea + Drone CI (might move over to Woodpecker CI) + Nexus instead of GitLab, which is way more lightweight and still has the features I need. Some people might also enjoy Forgejo or whatever - either way, it's nice to have options!


> It's just a database. There is no difference in a technical sense between "hallucination" and whatever else you imagine.

It's like a JPEG - except instead of lossy compression on images giving you a pixel soup that only vaguely resembles the original when you're resource-bound (and even modern SOTA models are, when it comes to LLMs), you get stuff that looks more or less correct but just isn't.


It would be like JPEG if opening JPEG files involved pushing in a seed to get an image out. It's like a database: it just sits there until you enter a query.


> You must state the tool you used (e.g. Claude Code, Cursor, Amp)

Interesting requirement! Feels a bit like asking someone what IDE they used.

There shouldn't be that meaningful of a difference between the different tools/providers unless you'd consistently see a few underperform and would choose to ban those or something.

The other rules feel like they might discourage AI use due to the extra boilerplate needed (though I assume the people using AI might make the AI fill out some of it), but I can understand why a project might want to have those sorts of disclosures and control. That said, the rules themselves feel quite reasonable!

