I'm a 25-year professional in the industry, so I've never really thought about the hobbyist side. I use commercial software from Cadence and Synopsys with a list price of over $1 million for a single physical design tool license, and we use about 200 of those licenses simultaneously to tape out a chip. Then we spend about $30 million on mask costs. If we make a mistake, it's another $20-30 million for new masks and another 4 months in the fab for a new chip.
Google / SkyWater / eFabless have this program for 90nm/130nm chips. That is really old technology from the 2002-2006 time range, but it is still useful for a lot of types of chips.
I'm curious: what kind of hobbyist chips do you want to make that can't be done in an FPGA? You can't do custom analog in an FPGA, but these days you can find FPGAs with multiple PCIe / USB / HDMI SerDes links, DACs, ADCs, etc.
The problem with FPGAs isn't the hardware... that's quite complete and capable. It's the software stack you're forced to go through to use most of them that makes them a non-starter for many/most hobbyists. (Cost is a major issue, but hardly the only one.) My guess is that as low-cost producers who have embraced the existing open source FPGA software start producing higher-end parts (more recent process nodes, larger LUT counts, etc.), that's when you'll really start to see them take off in the hobbyist world.
I think the OP isn't even talking about 2006-level capability, but rather something akin to 3D printing for semiconductors that would approach even early fab capability (i.e. somewhere in the 1960s-1980s range) for hobbyists. Obviously it doesn't exist currently, but that's the dream for some.
The cost of the FPGA design tools isn't really the biggest barrier. The big barrier is that the FPGA design tools from the FPGA companies are full of bugs that you spend an inordinate amount of time trying to work around. Some open source tools like Yosys have emerged. So far they're limited to the iCE40 FPGAs from Lattice and a few older Xilinx parts. I've used them with the iCE40 and they do work well. Hopefully FPGA companies will come to realize the value of opening up so that third-party tools can be developed for them.
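For anyone curious, the open source iCE40 flow is genuinely simple to drive. Here's a rough sketch of a build script using yosys, nextpnr-ice40, and the IceStorm tools (a minimal example, assuming the tools are on your PATH; the file names and the UP5K/SG48 part choice are just illustrative):

    # Rough sketch of the open source iCE40 flow:
    # synthesize with yosys, place & route with nextpnr-ice40, pack a bitstream with icepack.
    import subprocess

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["yosys", "-p", "synth_ice40 -top top -json top.json", "top.v"])   # synthesis
    run(["nextpnr-ice40", "--up5k", "--package", "sg48",
         "--json", "top.json", "--pcf", "top.pcf", "--asc", "top.asc"])    # place & route
    run(["icepack", "top.asc", "top.bin"])                                 # bitstream packing
    # run(["iceprog", "top.bin"])                                          # program the board over USB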
> The big barrier is that the FPGA design tools from the FPGA companies are full of bugs that you spend an inordinate amount of time trying to work around.
You cannot fathom how bad e.g. Xilinx's FPGA tool suite is. Beyond the fact that it's some kind of Frankenstein monster, because they bought up a bunch of smaller tools and glued them together with fucking Tcl, any individual piece is worse than the worst open source compiler/IDE/whatever tool you use to build software. And yet people say it's 10x better than the competitors' offerings. Hell on earth is being an FPGA dev without a direct line to a Xilinx employee for help debugging (which is how things really work professionally).
Just to lend some credence to myself here: Vitis, their supposed HLS offering that turns C++ into RTL by magic and genius and oracles, today embeds LLVM 7.0 (released 19 Sep 2018). Gee, I wonder how many perf and other improvements have landed in LLVM since 7, given that LLVM is approaching 17 today.
I could tell you more, but it would just spike my blood pressure after 6 months of not having to deal with that mess.
Can confirm. I used Xilinx Vivado to compile for their FPGAs on Amazon F1 servers a few years ago. There came a point when, each time I edited a single small Verilog file in my small project, the GUI took 40 minutes of 100% CPU just to update its file list. That's before you could do anything useful like telling it to compile or simulate. The GUI wasn't usable until it finished this silly update.
I knew FPGA compilation could be slow, but this wasn't compilation; this was a ridiculously basic part of the GUI. I knew it had to read my file and analyse the few module dependencies, but seriously, 40 minutes? At that point I just wanted to run simple simulations of small circuits, which shouldn't be slow.
In (open source) Verilator, the same simulations ran from the same source files in just a few milliseconds, with nothing precomputed or cached.
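(For a sense of scale, the whole Verilator side was roughly this much ceremony. A sketch rather than my exact setup: the --binary shortcut needs a recent Verilator 5.x, and the file and module names are illustrative.)

    # Sketch: build and run a self-contained simulation with Verilator 5.x.
    # "--binary" makes Verilator generate and compile a main() for the testbench top.
    import subprocess

    subprocess.run(["verilator", "--binary", "-j", "0",
                    "--top-module", "tb_top", "top.v", "tb_top.v"], check=True)
    subprocess.run(["./obj_dir/Vtb_top"], check=True)   # run the compiled simulation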
I looked into what Xilinx Vivado was really doing and found a log indicating that it was re-running a Verilog read on the changed file several thousand times, each time taking a second or so.
That was such a ridiculous bug for software Xilinx said they had spent over $500M developing. If there were good parts of the software, I didn't get to see them due to this sort of nonsense. I think it was fixed in a later version, but due to (yay) enforced licensing constraints I couldn't use a fixed version targeting these FPGAs. That's when I abandoned Vivado and Xilinx (and Amazon) for the project, as I didn't have that much spare time to waste on every edit.
I'm under the impression that the current crop of open source FPGA tools, and especially the open source ASIC tools, is much better designed.
> I looked into what Xilinx Vivado was really doing and found a log indicating that it was re-running a Verilog read on the changed file several thousand times, each time taking a second or so.
Just reading that made me nauseous, to think this forced so many trillions of CPU cycles to be spent on nothing, and nobody at Xilinx corrected it until some time later.
I started working with a Stratix 10 GX dev kit and the Intel Quartus Prime Pro software last year. It's been a nightmare.
1. Unless you have a direct line to an experienced support engineer inside Intel, technical support amounts to pleading with new-hire support engineers who can't do more than try to look something up and regurgitate it back to you.
2. Sample designs don't work with the tools you have. A design might be for version 18 of Quartus and you have version 23. You can try to auto-upgrade the IP, and maybe 3/4 of the time it works. The other 1/4 of the time it's some obscure error you can't track down, so you're back on the "Community Site" begging for help.
3. Doing something like programming your design into flash on the S10 board involves lots and lots of research, including watching Intel-produced YouTube videos.
I could go on, but the whole process is like running in wet cement.
> I could go on, but the whole process is like running in wet cement.
Just quit. Seriously, it's not worth it. No job you have (not even low-latency work in HFT) will compensate you enough to make investing time, blood, sweat, hair, and sleep into this work worth it. There is a better, more rewarding software job, with an actually transferable set of skills, waiting for you somewhere out there.
> I could tell you more, but it would just spike my blood pressure after 6 months of not having to deal with that mess.
There was a time when I could've transitioned to FPGA development, and I was pretty keen to do so. But after using (fighting with) Xilinx's tools and watching other people who were deeper into the FPGA side wage similar wars, I decided it just wasn't worth it; it would be better just to stay on the software side of things, where the tools are mostly open source and so much better. In FPGA development you just don't have any control over the backend tools. Fortunately there are a lot of open source front-end simulation tools like Verilator, and some limited open source backend tools like Yosys. Hopefully open source will continue to make inroads, but it's slow going.
I get paid to do this professionally, so I don't follow the hobbyist side, but I've heard of people using this for FPGAs. I have no idea how it compares to the commercial tools.
OSS CAD Suite is a binary software distribution of a number of open source tools used in digital logic design. You will find tools for RTL synthesis, formal hardware verification, place & route, FPGA programming, and testing, with support for HDLs like Verilog, Migen, and Amaranth.
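Amaranth in particular is pleasant for small projects: you describe the hardware in Python and emit plain Verilog for whatever backend you like. A minimal sketch (module and port names are mine, purely for illustration):

    # Minimal Amaranth sketch: a clock divider blinking one LED,
    # converted to plain Verilog that yosys/nextpnr (or any other backend) can consume.
    from amaranth import Elaboratable, Module, Signal
    from amaranth.back import verilog

    class Blinky(Elaboratable):
        def __init__(self):
            self.led = Signal()          # single LED output port

        def elaborate(self, platform):
            m = Module()
            counter = Signal(24)
            m.d.sync += counter.eq(counter + 1)    # free-running counter in the sync domain
            m.d.comb += self.led.eq(counter[-1])   # MSB toggles slowly enough to see
            return m

    if __name__ == "__main__":
        top = Blinky()
        print(verilog.convert(top, ports=[top.led]))   # emit Verilog on stdout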
> I use commercial software from Cadence and Synopsys with a list price of over $1 million for a single physical design tool license, and we use about 200 of those licenses simultaneously to tape out a chip. Then we spend about $30 million on mask costs. If we make a mistake, it's another $20-30 million for new masks and another 4 months in the fab for a new chip.
I've thought for a long time now that this is an area ripe for disruption. But it's very difficult to disrupt, and it hasn't been yet. EDA software is probably the easier part to disrupt. Some open source EDA tools are out there, but not so much on the physical design side.
I've read there are actually some excellent open source ASIC design flows now, thanks to some open PDKs (notably SkyWater) as well as recently funded research. I haven't tried the toolchains myself, but they are readily available, and you have courses like Zero to ASIC using them.
They don't target the advanced nodes where masks cost as much as the GP's prices (though $20-30M sounds higher than I expected even at the leading edge), but they are working their way forward, and the space in general is being disrupted at last.
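For a sense of what "readily available" means here: with OpenLane, one of those open RTL-to-GDS flows built on Yosys/OpenROAD and the SkyWater sky130 PDK (and, as far as I know, the one the Zero to ASIC course builds on), kicking off a full run looks roughly like this. A hedged sketch, assuming OpenLane 1.x and the PDK are already installed; the checkout path and design name are illustrative:

    # Hedged sketch: launching an RTL-to-GDS run with OpenLane 1.x.
    import subprocess

    subprocess.run(
        ["./flow.tcl", "-design", "my_counter", "-tag", "run_1"],
        cwd="/path/to/OpenLane",     # illustrative checkout location
        check=True,
    )
    # GDSII, DEF, and reports end up under designs/my_counter/runs/run_1/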
I've taped out 4 chips in 5nm, and our managers and VPs have said it's about $30 million for a full set of masks (base layers plus metal layers).
The $20 million is for a metal-layers-only respin.
We are moving to 3nm right now, which will be even more expensive.
The open source tools can't handle the latest process nodes. Maybe in the future, but this is a highly specialized area with tons of NDAs from the fabs covering PDKs and DRC decks.
Google / SkyWater / eFabless have this program for 90nm/130nm chips. That is really old technology from the 2002-2006 time range, but it is still useful for a lot of types of chips.
https://opensource.googleblog.com/2022/07/SkyWater-and-Googl...
https://www.skywatertechnology.com/technology-and-design-ena...
https://efabless.com/open_shuttle_program
I'm curious: what kind of hobbyist chips do you want to make that can't be done in an FPGA? You can't do custom analog in an FPGA, but these days you can find FPGAs with multiple PCIe / USB / HDMI SerDes links, DACs, ADCs, etc.