
The US started upgrading and modernizing its nuclear production capacity in earnest during the Biden administration, targeting a capacity of 80 weapons per year. It pre-dates the current administration by years.

This disclosure was pretty obviously intended to put international pressure on China to conform to norms on nuclear testing, while also letting them know that the US can detect their top secret nuclear tests despite their best efforts to hide them.

I'm sure the timing of the disclosure wasn't accidental but I don't see much nefarious about it. The US doesn't disclose their knowledge of this type of thing unless there is some geopolitical leverage in doing so, since it makes it easier for other countries to calibrate US intelligence capabilities.


It may also be because of the detection of Iodine-129 in seawater off the Philippines, which was eventually traced to the Yangtze River [0].

[0]: https://x.com/Rainmaker1973/status/2015081659805110532


Congress has a substantially greater impact on the business climate than the President.

And the President has enormous influence over what Congress does (veto).

Of course everything is nuanced; the trend is merely interesting, especially juxtaposed against people consistently voting for Republicans for "economic" reasons.


That’s unless you have a Congress that lets the President usurp the power of the purse that should be theirs, and a Supreme Court that rubber-stamps everything he does.

If you are referring to the current administration, SCOTUS has barely "rubber stamped" any of his actions, and has rejected several already (though we will see with tariffs... but Polymarket has it only 31% in favor of the president so at least the odds are in our favor).

You mean like how he is now able to fire government workers, impound funds, fire people who were supposed to be independent of the executive branch, and basically say he can commit crimes and send the National Guard into states?

Where does threatening allies, tariffs and kidnapping foreign leaders fit into this?

He creates uncertainty and it’s hard to see how that helps the US economy.


And Congress is controlled by the president as overriding a veto is extremely difficult.

This president will also veto your congressional seat if you cross him. See what happened to the people who voted to impeach him: they got primaried out.

Except they have abdicated most responsibility for decades, especially when the president is of the same party.

Many now talk like they work for the president.


I'm not so sure either has much impact. Economic policy doesn't change much between administrations, and Congress has been ineffective for a long time. Politics is mostly culture war things these days.

The Fed seems to be the big driver of the economy. Other than that, the government is moving things at the margins. Even Trump's tariff shenanigans don't seem to have rocked the boat much.


> Politics is mostly culture war things these days.

“These days” like, since Eisenhower, or are we just posting absurdist nonsense as apologia? Economic policy doesn’t change much? Ok.


That is a nicely designed DGGS, a lot of attention paid to the details. I hadn't seen it before.

The author of A5 was recently featured on the Mapscaping podcast: https://mapscaping.com/podcast/a5-pentagons-are-the-new-best...

There is a lot of literature on join operations using discrete global grid systems (DGGS). H3 is a widely used DGGS optimized for visualization.

If joins are a critical performance-sensitive operation, the most important property of a DGGS is congruency. H3 is not congruent; it was optimized for visualization, where congruency doesn’t matter, rather than for analytical computation. For example, the article talks about deduplication, which is not even necessary with a congruent DGGS. You can do joins with H3 but it is not recommended as a general rule unless the data is small enough that you can afford to brute-force it to some extent.

H3 is great for doing point geometry aggregates; it shines at that. Not so much for geospatial joins, though. DGGS optimized for analytic computation (and joins by implication) exist; they just aren’t optimal for trivial visualization.


S2 has this property: https://s2geometry.io

Yes, S2 is a congruent DGGS. Unfortunately, it kind of straddles the analytics and visualization property space, being not particularly good at either. It made some design choices, like being projective, that limit its generality as an analytic DGGS. In fairness, its objectives were considerably more limited when it was created. The potential use cases have changed since then.

Is there a congruent DGGS that you would recommend?

None that are well-documented publicly. There are a multitude of DGGS, often obscure, and they are often designed to satisfy specific applications. Most don’t have a public specification but they are easy to design.

If the objective is to overfit for high-performance scalable analytics, including congruency, the most capable DGGS designs are constructed by embedding a 2-spheroid in a synthetic Euclidean 3-space. The metric for the synthetic 3-space is usually defined to be both binary and a whole multiple of meters. The main objection is that it is not an “equal area” DGGS, so not good for a pretty graphic, but it is trivially projected into one as needed so it doesn’t matter that much. The main knobs you might care about are the spatial resolution and how far the 3-space extends, e.g. it is common to include low-earth orbit in the addressable space.
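
To make that concrete, here is a rough sketch of the general idea rather than any particular standard: quantize WGS84 positions onto an integer-meter ECEF-style grid and derive hierarchical cell IDs by interleaving bits. The constants and the cell_id helper are purely illustrative.

    import math

    A = 6378137.0                  # WGS84 semi-major axis (m)
    F = 1 / 298.257223563          # WGS84 flattening
    E2 = F * (2 - F)               # first eccentricity squared
    OFFSET = 1 << 24               # shift so all coordinates (incl. LEO) are non-negative

    def wgs84_to_ecef(lat_deg, lon_deg, alt_m=0.0):
        """Geodetic position -> Earth-centered Cartesian coordinates in meters."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        return ((n + alt_m) * math.cos(lat) * math.cos(lon),
                (n + alt_m) * math.cos(lat) * math.sin(lon),
                (n * (1 - E2) + alt_m) * math.sin(lat))

    def cell_id(lat_deg, lon_deg, alt_m=0.0, level=20):
        """Interleave the top `level` bits of each quantized axis (Morton order).
        Truncating an ID in 3-bit steps yields the ID of its parent cell."""
        coords = [int(c) + OFFSET for c in wgs84_to_ecef(lat_deg, lon_deg, alt_m)]
        code = 0
        for bit in range(24, 24 - level, -1):      # most-significant bits first
            for c in coords:
                code = (code << 1) | ((c >> bit) & 1)
        return code

    parent = cell_id(37.7749, -122.4194, level=10)
    child = cell_id(37.7749, -122.4194, level=20)
    assert child >> 30 == parent                   # child ID nests inside parent ID

Because a child's ID is a bit-extension of its parent's ID, cells at different resolutions nest exactly, which is the congruency property at issue.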

I was working with a few countries on standardizing one such design but we never got it over the line. There is quite a bit of literature on this, but few people read it and most of it is focused on visualization rather than analytic applications.


Pointers to the literature please. I don't work in this space but love geometry.

I agree that the lack of congruency in H3 hexagons can cause weird overlaps and gaps if you plot mixed resolutions naively, but there are some workarounds that work pretty well in practice. For example, if you have mixed resolutions from compacted H3 cells but a single “logical” target resolution underneath, you can plot the coarser cells not with their native geometry, but using the outline of their children. When you do that, there are no gaps. (Totally unrelated but fun: that shape is a fractal sometimes called a "flowsnake" or a "Gosper Island" (https://en.wikipedia.org/wiki/Gosper_curve), which predates H3 by decades.)
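
If you want to try it, here is a rough sketch of that children-outline trick (assuming the h3-py v4 API and shapely; the helper name is mine):

    import h3
    from shapely.geometry import Polygon
    from shapely.ops import unary_union

    def outline_at_resolution(cell, target_res):
        """Footprint of `cell` drawn as the union of its children at target_res,
        so it tiles exactly against neighboring cells of that resolution."""
        children = h3.cell_to_children(cell, target_res)
        polys = [Polygon([(lng, lat) for lat, lng in h3.cell_to_boundary(c)])
                 for c in children]
        return unary_union(polys)      # the "flowsnake"-like outline

    coarse = h3.latlng_to_cell(37.7749, -122.4194, 6)
    footprint = outline_at_resolution(coarse, 9)   # no gaps against res-9 neighbors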

That said, this feels like an issue with rendering geometry rather than with the index itself. I’m curious to hear more about why you think the lack of congruency affects H3’s performance for spatial joins. Under the hood, it’s still a parent–child hierarchy very similar to S2’s — H3 children are topological rather than geometric children (even though they still mostly overlap).


In a highly optimized system, spatial joins are often bandwidth bound.

Congruency allows for much more efficient join schedules and maximizes selectivity. This minimizes data motion, which is particularly important as data becomes large. Congruent shards also tend to be more computationally efficient generally, which does add up.

The other important aspect not raised here is that congruent DGGS have much more scalable performance when using them to build online indexes during ingestion. This follows from them being much more concurrency-friendly.


I appreciate the reply! So, I might be wrong here, but I think we may be talking about two different layers. I’m also not very familiar with the literature, so I’d be interested if you could point me to relevant work or explain where my understanding is off.

To me, the big selling point of H3 is that once you’re "in the H3 system", many operations don’t need to worry about geometry at all. Everything is discrete. H3 cells are nodes in a tree with prefixes that can be exploited, and geometry or congruency never really enter the picture at this layer.
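
For instance, a containment check is just a parent lookup on cell IDs, with no geometry involved (a tiny sketch, assuming the h3-py v4 API):

    import h3

    cell = h3.latlng_to_cell(40.7128, -74.0060, 9)       # a res-9 cell
    parent = h3.cell_to_parent(cell, 6)                   # its res-6 ancestor

    # "Is this res-9 cell inside that res-6 cell?" is a pure ID operation:
    assert h3.cell_to_parent(cell, h3.get_resolution(parent)) == parent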

Where geometry and congruency do come in is when you translate continuous data (points, polygons, and so on) into H3. In that scenario, I can totally see congruency being a useful property for speed, and that H3 is probably slower than systems that are optimized for that conversion step.

However, in most applications I’ve seen, the continuous-to-H3 conversion happens upstream, or at least isn’t the bottleneck. The primary task is usually operating on already "hexagonified" data, such as joins or other set operations on discrete cell IDs.

Am I understanding the bottleneck correctly?


A DGGS is a specialized spatial unit system that, depending on the design, allows you to elide expensive computation for a select set of operations with the tradeoff that other operations become much more expensive.

H3 is optimized for equal-area point aggregates. Congruency does not matter for these aggregates because there is only a single resolution. To your point, in H3 these are implemented as simple scalar counting aggregates -- little computational geometry required. Optimized implementations can generate these aggregates more or less at the speed of memory bandwidth. Ideal for building heat maps!
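
A minimal sketch of what that aggregate looks like (assuming the h3-py v4 bindings):

    from collections import Counter
    import h3

    points = [(37.7749, -122.4194), (37.7750, -122.4190), (40.7128, -74.0060)]
    heat = Counter(h3.latlng_to_cell(lat, lng, 8) for lat, lng in points)
    # heat maps each res-8 cell ID to a point count, ready to color by value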

H3 works reasonably for sharding spatial joins if all of the cells are at the same resolution and are therefore disjoint. The number of records per cell can be highly variable so this is still suboptimal; adjusting the cell size to get better distribution just moves the suboptimality around. There is also added complexity if polygon data is involved.
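
Roughly what that sharding looks like in code (illustrative only, again assuming h3-py v4); the record-skew problem is exactly the uneven bucket sizes here:

    from collections import defaultdict
    import h3

    def sharded_join(points_a, points_b, res=7):
        """Join two point sets by bucketing both into disjoint res-7 cells."""
        buckets = defaultdict(list)
        for lat, lng in points_a:
            buckets[h3.latlng_to_cell(lat, lng, res)].append((lat, lng))
        for lat, lng in points_b:
            for candidate in buckets.get(h3.latlng_to_cell(lat, lng, res), []):
                yield candidate, (lat, lng)    # refine with an exact predicate here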

The singular importance of congruence as a property is that it enables efficient and uniform sharding of spatial data for distributed indexes, regardless of data distribution or geometry size. The practical benefits follow from efficient and scalable computation over data stored in cells of different size, especially for non-point geometry.

Some DGGS optimized for equal-area point aggregates are congruent, such as HEALPix[0]. However, that congruency comes at a high computational cost and an unreasonably difficult technical implementation. Not recommended for geospatial use cases.

Congruence has an important challenge that most overlook: geometric relationships on a 2-spheroid can only be approximated on a discrete computer. If you are not careful, quantization to the discrete during computation can effectively create tiny gaps between cells or tiny slivers of overlap. I've seen bugs in the wild from when the rare point lands in one of these non-congruent slivers. Mitigating this can be costly.

This is how we end up with DGGS that embed the 2-spheroid in a synthetic Euclidean 3-space. Quantization issues on the 2-spheroid become trivial in 3-space. People tend to hate two things about these DGGS designs though, neither of which is a technical critique. First, these are not equal area designs like H3; cell size does not indicate anything about the area on the 2-sphere. Since they are efficiently congruent, the resolution can be locally scaled as needed so there are no technical ramifications. It just isn't intuitive like tiling a map or globe. Second, if you do project the cell boundaries onto the 2-sphere and then project that geometry into something like Web Mercator for visualization, it looks like some kind of insane psychedelic hallucination. These cells are designed for analytic processing, not visualization; the data itself is usually WGS84 and can be displayed in exactly the same way you would if you were using PostGIS, the DGGS just doesn't act as a trivial built-in visualization framework.

Taking data stored in a 3-space embedding and converting it to H3-ified data or aggregates on demand is simple, efficient, and highly scalable. I often do things this way even when the data will only ever be visualized in H3 because it scales better.

[0] https://en.wikipedia.org/wiki/HEALPix


> If joins are a critical performance-sensitive operation, the most important property of a DGGS is congruency.

Not familiar with geo stuff / DGGS. Is H3 not congruent because hexagons, unlike squares or triangles, do not tile the plane perfectly?

I mean: could a system using hexagons ever be congruent?


Hexagons do tile the Euclidean plane perfectly. They are the largest (most-sided) of the three regular polygons that do so.

That's not true when tiling the Earth, though. You need 12 pentagons to close the shape at every zoom level; you can't tile the Earth with just hexagons. That's also why footballs stitch together pentagons and hexagons.
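
The count of 12 falls out of Euler's formula V - E + F = 2: with only pentagons and hexagons, and three faces meeting at each vertex, the hexagon terms cancel and exactly 12 pentagons remain. A quick symbolic sanity check (using sympy, just for illustration):

    from sympy import symbols, Eq, solve

    P, H = symbols("P H")            # number of pentagons and hexagons
    faces = P + H
    edges = (5 * P + 6 * H) / 2      # each edge is shared by two faces
    verts = (5 * P + 6 * H) / 3      # three faces meet at each vertex
    print(solve(Eq(verts - edges + faces, 2), P))   # -> [12], independent of H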

The observation that concatenative programming languages have nearly ideal properties for efficient universal learning on silicon is very old. You can show that the resource footprint required for these algorithms to effectively learn a programming language is much lower than other common types of programming models. There is a natural mechanical sympathy with the theory around universal learning. It was my main motivation to learn concatenative languages in the 1990s.

This doesn't mean you should write AI in these languages, just that it is unusually cheap and easy for AI to reason about code written in these languages on silicon.


It sounds like you’re referring to a proof. Where can one find it, and what background prepares one for it?

Consider systems that require continuous active stabilization to not fail because the system has no naturally stable equilibrium state even in theory. Some of our most sophisticated engineering systems have this property e.g. the flight control systems that allow a B-2 bomber to fly. In a software context you see these kinds of design problems in large-scale data infrastructure systems.

The set of system designs that exhibit naturally stable behavior doesn't overlap much with the set of system designs that deliver maximum performance and efficiency. The capability gap between the two can be large but most people choose easy/simple.

There is an enormous amount of low-hanging opportunity here but most people, including engineers, struggle with systems thinking.


Some systems require a total commitment to the complexity because it is intrinsic. There is no "simple" form that also works, even if poorly. In many contexts, "systems thinking" is explicitly about the design of systems that are not reducible to simpler subsystems, which does come up in many types of engineering. Sometimes you have to eat the whole elephant.

There is a related phenomenon in some types of software where the cost of building an operational prototype asymptotically converges on the cost of just writing the production code. (This is always a fun one to explain to management who think building a prototype massively reduces delivery risk.)


This is the point we are at now with wide-scale societal technologies: combining the need for network effects with the product being the prototype, and having no option but to work on the system live.

Some projects have been forced so far, by diverting resources (either public-funded or not-yet-profitable VC money), but these efforts have not proven to be self-sustaining. Humans will be perpetually stuck where we are as a species if we cannot integrate the currently opposing ideas of up-front planning vs. move fast and break things.

Society is slowly realizing the step-change in difficulty from projects in controlled conditions that can rely on simplified models to these irreducibly complex systems. Western doctors are facing an interesting parallel, now becoming more aware that human beings must be treated in the same way--that we emerge from parts which can individually be simplified and understood, but which could never describe the overall system behavior. We are good examples of the intrinsic fault-tolerance required for such systems to remain stable.


When I was in my 20s I worked for a well-known global telco. In our office, we had a group of people whose literal job was watching streaming porn from around the world all day. They had walls of screens running simultaneously.

Those streams were customers. Our people’s job was to monitor the streams for video and audio quality issues. When I would tell my friends that I worked with guys whose literal job was watching porn on a sofa all day, they thought it must be the best job in the world.

But when I talked to the guys that actually had the job, they said it was a terribly boring chore. Even worse, they said you quickly become so desensitized that it bled over into their non-work life in a negative way. Almost everyone that had that job eventually grew to hate it.

These kinds of jobs have always existed. To some extent someone needs to do it. While we may be outsourcing it now, there is a long history of paying people in the US to do it.


I can only imagine that watching abuse/porn all day long may cause a person to change, possibly even lead to suicide.

The President is the representative of the constituent State governments of America, not the people. That is why it is the States that vote. The only part of the Federal government that is intended to proportionally represent the people, and is in practice, is the House of Representatives in Congress.

This is a good and appropriate thing. States are approximately countries. Most laws only exist at the State level e.g. most common crimes don't exist in Federal law. The overreach of the Federal government claiming broad authority over people is an unfortunate but relatively recent (20th century) phenomenon. The US does seem to be returning to States having more autonomy, which I'd say is a good thing.


In my lifetime, I had my card details stolen once (in Washington DC). It was an American Express. They caught it immediately and shipped me a new card before I even noticed.

It was basically “we caught some shady shit, here is your new card number, which will be delivered today”. It is one of the reasons I like Amex. They are johnny-on-the-spot when they get a sniff of fraud.

