According to a Reddit comment [1], this is the same MicroBlaze RTL with a RISC-V instruction decoder in front of it. This seems crazy from a let's-make-the-best-RISC-V-core perspective, but that's never been Xilinx/AMD's goal.
MicroBlaze has always been a great example of a boring in-order RISC CPU in a boring niche. For an FPGA vendor, soft cores are loss leaders: they sell silicon but don't make money on their own. They are also boring technology: they are "integration glue", and don't belong in the portion of the FPGA that drives performance. "Good enough" is good enough.
If AMD really is reusing MicroBlaze RTL, then they're able to keep their existing hardware (core, FPU, debug, peripherals, etc.) and software (HAL, compiler, drivers). These are all highly desirable from the perspective of the vendor, and of any users looking for a painless transition to the new MicroBlaze core.
That Redditor has -5 karma. Also that idea makes pretty much zero sense when it's a matter of a couple of days to implement a simple RISC-V core from scratch.
I would not rely on that information.
It does, however, have the same external interface as Microblaze, and hardware-wise it is a drop-in replacement in existing designs.
I posted the Reddit link because it wasn't my insight - but I agree it's a dubious source. That doesn't make it wrong. (I'm not certain it's right, either - it's an interesting claim.)
An RV32I core is easy to implement from scratch, but MicroBlaze V already has a single-precision FPU, and it will need an MMU to reach feature parity. Producing a RISC-V core that's feature-matched with MicroBlaze is much bigger than a weekend project.
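To put a number on the "easy" part: the RV32I base encoding is regular enough that instruction field extraction is a few lines. A minimal sketch in C (field extraction only, nothing like a full core, and not a claim about what AMD actually did):

    #include <stdint.h>

    /* Minimal RV32I field extraction -- illustrates why a bare RV32I
       decoder is a small job compared to matching MicroBlaze's FPU/MMU
       feature set. Field positions follow the RV32I base encoding. */
    typedef struct {
        uint32_t opcode, rd, funct3, rs1, rs2, funct7;
        int32_t  imm_i;                   /* sign-extended I-type immediate */
    } rv32_fields;

    static rv32_fields decode_rv32(uint32_t insn) {
        rv32_fields f;
        f.opcode = insn & 0x7f;           /* bits  6:0  */
        f.rd     = (insn >> 7)  & 0x1f;   /* bits 11:7  */
        f.funct3 = (insn >> 12) & 0x07;   /* bits 14:12 */
        f.rs1    = (insn >> 15) & 0x1f;   /* bits 19:15 */
        f.rs2    = (insn >> 20) & 0x1f;   /* bits 24:20 */
        f.funct7 = (insn >> 25) & 0x7f;   /* bits 31:25 */
        f.imm_i  = (int32_t)insn >> 20;   /* arithmetic shift sign-extends */
        return f;
    }

The FPU, MMU, caches, debug logic, and the privilege/exception model are where the real effort goes.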
> For an FPGA vendor, soft cores are loss leaders: they sell silicon but don't make money on their own.
This is dead right, the enabling technologies like this don’t make money in their own right so they aren’t considered valuable in hardware companies. It’s why the billionaire CEOs of Xilinx and Altera shake their heads ruefully when they hear Jensen Huang continue to throw money away on Nvidia’s software stack. One day he’ll learn where the real value is.
> This is dead right, the enabling technologies like
You fundamentally misunderstand. Soft-core CPUs aren't enabling technologies and haven't been for decades. They are plumbing, like FIFOs or SERDES blocks. You can't sell an FPGA into most markets without them.
All the other major FPGA manufacturers already started offering an officially supported RISC-V core as an alternative to their proprietary-ISA cores a few years ago. Of course people also use 3rd-party cores off GitHub, but supported and integrated into the IDE and other tools means something to a lot of customers.
Microsemi have been offering RISC-V soft cores since 2017 and hard cores (PolarFire SoC, in e.g. the new BeagleV-Fire and Icicle boards) since late 2020.
Lattice announced their first official RISC-V soft core in, I think, June 2020 (the collaboration with SiFive was announced in December 2019), with improved versions, e.g. an 800-LUT core, in mid 2021.
MicroBlaze sits in the gate-count range that is being actively decimated by RISC-V, just as Tensilica, ARC, etc. have lost most of their value-add in that space. Having personally ported a kernel to MicroBlaze, it's basically a halfway point between MIPS and SH4: a classic pipelined RISC in the ~20k-gate range.
Probably the most interesting part of this announcement is that they're so all-in that they're straight-up redefining their trademarked term "Microblaze" rather than coining a new term and continuing to ship classic Microblaze updates.
Microblaze is designed for optimal mapping to Xilinx FPGA cells, it doesn't make sense to compare ASIC gate count.
Also, their secret sauce is the ecosystem and tooling around it. If you don't use that, you are not their target and are free to use any open-source RISC-V core you please.
Back in the day, SiFive had an RV32 core that could fit in an Artix-7 that was pretty bare-bones. IO was random, it couldn't use the onboard DRAM, etc. It would be cool to see an officially supported RV32 softcore on an Artix-7. I had reasonably good experiences with the MicroBlaze back in the day, but would never think of using it for anything other than testing or education 'cause it was (very) closed source. And while I'm not the world's biggest RISC-V cheerleader, this is the kind of thing that it's good at: "here's a tool that uses an ISA you may have already invested in. also, we're not going to try to lock you in to THAT side of the toolchain." I'm somewhat okay with AMD/Xilinx locking you in to whatever is underneath the ISA since you're probably going to have to pay for hardware somehow: either buying the FPGA or buying a catalog part (if they ever emerge).
Also... thx for pointing out AMD bought Xilinx. I had completely forgotten about it and was mildly surprised to see AMD adopting MicroBlaze.
> Back in the day, SiFive had an RV32 core that could fit in an Artix-7 that was pretty bare-bones.
I did some benchmarking on a single 64 bit dual-issue U74 core with FPU, caches etc running in an Arty-100T. Just a single core, not all five cores as in U74-MC (HiFive Unmatched, VisionFive 2 etc), but it was running Linux.
My memory is the E31 (RV32) used about 70% of the ARTIX-7 35T. Bruce probably remembers the stats on the U74 (RV64) better than I do. People kept stealing the ARTIX-7 100's off my desk so I never had the opportunity to play around with them.
Xilinx is a pretty big FPGA manufacturer and Microblaze offers a nice selection of fault-tolerance features, including TMR (triple modular redundancy). TMR on RISC-V is not novel here, but it does mean the large number of projects already using Microblaze, and new ones that want to use Microblaze, can now use RISC-V.
Every Western Digital HD, the Titan M2 in the Pixel 6 and 7, newly manufactured Seagate HDs, and more… I’d say that counts as popular. Is it as popular as ARM? Technically, nothing comes remotely close to ARM in chips shipped. If that’s your measure, x86 isn’t popular.
You probably have 2 or 3 8051s in your wallet. Virtually every appliance which uses electricity and has a display has one or two. And sure... you probably have more Pixel 6's and SD Cards than lightbulbs, but most homes in America have FAR more 8051s than RV32s.
That is, the only advantage MicroBlaze V gives you, besides blessing from the chip manufacturer, is speed. Aren't FPGA CPUs usually used to do tasks that aren't especially time-sensitive? I mean, that's what the FPGA fabric is for, to get high-speed, time-sensitive tasks done (in conjunction with the on-chip I/O interfaces).
Tooling and support. You don't really want to mess around with a RISC-V softcore, you just want it to work and interface with the stuff you actually care about already.
MicroBlaze allows you to literally construct your own softcore via drag-and-drop, selecting from a wide variety of configuration options and peripherals. It includes SDKs for your custom applications, and debugging tools to figure out what's going wrong. I would not be surprised if development using SERV would take several orders of magnitude longer, solely due to immature tooling.
I’m all for new soft cores. However, do we really need to pollute the name space? Microblaze is already a name of an architecture that somebody might want to Google.
Hardware compatible doesn’t mean very much in this case. It’s a different ISA, so you’ll need a different toolchain and recompilation (possibly a rewrite, given that assembly is very common in deeply embedded processors). It’ll also have different LUT usage, etc.
It means it's designed as a direct drop-in replacement for a traditional MicroBlaze core. Sure, software must be recompiled, but the hardware interfaces match what was there before for peripheral access, debug, instruction offload, etc. This makes the transition to RISC-V much more seamless if you were already using MicroBlaze.
CPU is just one part. You're probably adding your own coprocessor and whatever other IO to the FPGA. If the CPU uses a different interconnect/protocol design, you'll have to redesign and debug your entire interface which will be very expensive.
Meanwhile, microblaze is a proprietary architecture. AMD puts a few hundred thousand dollars of dev time into it here or there, but most software stacks will have tons of stuff not at all designed for that ISA. If you can switch to RISC-V, you get access to an ecosystem where tons of companies are all pouring multi-millions in every year.
EDIT: a great example is that MicroBlaze was added to LLVM in 2010, but then removed in 2013 because nobody was free to maintain it.
Now AMD comes along and says you can swap the CPU ISA to RISC-V without any hardware redesign, and then, with a simple recompile, you can start using that massive ecosystem instead of spending man-years writing your own stuff, which saves your company lots of money.
It has the same interface to the rest of the design -- the buses and control signals etc. You can drop the new RISC-V soft core straight into an existing design using Microblaze and change only the program code.
I assume it means it's essentially pin-compatible, like, say, a PCIe GPU upgrade would be.
Still needs new software.
But has the same HDMI and PCIe and the same low-level protocols.
Soft cores don't have their own pins. I think "hardware compatible" means that it runs on the same FPGAs as MicroBlaze, with the same toolchain and product suite.
"Pins", in this case, means the logical ports used to interface one module within an FPGA to another. Think AMBA bus, clock(s), interrupt lines, etc. An FPGA SOC design that currently throws a bunch of different IPs (modules) together to do... something... could do the same with this core thrown in to replace a (classic) Microblaze with only a resynthesis on the hardware side, and a recompile on the software side.
It is the same interface facing your HDL. Whereas on the software side, it is RISC-V, the ISA that's rapidly growing the strongest ecosystem.
Just a recompile away, except this time you don't have to deal with poor, proprietary, bespoke, low-quality tooling; you can instead use reputable industry-standard tools like GCC, binutils, and LLVM.
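As a concrete (hypothetical) illustration of "same hardware, just a recompile": because the core exposes the same bus interfaces and address map as classic MicroBlaze, bare-metal driver code like the sketch below builds for either ISA; only the cross-compiler changes. The base address and register offsets here are made up for illustration.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO driver. Because the RISC-V core is
       dropped into the same design, the peripheral sits at the same
       address, so this compiles with the MicroBlaze toolchain or a
       RISC-V one (e.g. riscv32-unknown-elf-gcc) without source changes. */
    #define GPIO_BASE  0x40000000u                          /* assumed base */
    #define GPIO_DATA  (*(volatile uint32_t *)(GPIO_BASE + 0x0))
    #define GPIO_TRI   (*(volatile uint32_t *)(GPIO_BASE + 0x4))

    void led_on(uint32_t mask) {
        GPIO_TRI  &= ~mask;   /* configure pins as outputs (assumed 0 = output) */
        GPIO_DATA |=  mask;   /* drive them high */
    }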
I'm pretty sure the "bespoke" tooling was just older versions of those industry-standard tools, because of poor upstreaming of architecture support. And there are probably many very reasonable justifications for that having been the case... not the least of which is shipping toolchains with their own design tools.
RISC-V doesn't seem like the "strongest" ISA ecosystem, unless you mean among soft core CPUs?
Strongest of any ISA after ARM and x86 at this stage, I should think.
I can't really see AMD/Xilinx or Intel/Altera offering a free 486SX soft core, even though they legally could. It would, I expect, use a lot more LUTs and be slower.
Among 32-bit microcontroller-class architectures, modern toolchain support for rv32imc is second only to armv7m IMHO. Nobody wants to maintain support for any other vendor-specific ISAs.
I doubt that, but AMD said that it is free to use in any AMD (Xilinx) FPGA.
Had it required paying a fee, there would have been no interest in this announcement, because there are free alternatives.
It can be assumed that this core is well optimized to use the resources of a Xilinx FPGA efficiently, which is likely its advantage over the alternatives.
While it may not have an open-source license, the source code for previous MicroBlaze cores is part of the development kit, lightly obfuscated (without even removing the source code comments!).
Would love to get an overview of all these RISC-V soft cores: are they open source, what sort of CoreMark score do they get, how big are they, etc.
That said, instead of a 32-bit core, I would have preferred a 64-bit core, because once I write 64-bit RISC-V assembly code paths, I could really re-use them on desktop/server/embedded.
> because once I write 64-bit RISC-V assembly code paths, I could really re-use them on desktop/server/embedded.
Not really. There's not much overlap between "desktop/server" (a 64 bit only affair) and "embedded" (mostly 32 bit cores). Not to mention that apart from 32<->64 bit, the programming environment tends to be very different between these: system complexity, boot procedure, how to interact with outside world - just to name a few.
In short: pick target device(s), code for that. Want code to move easily between different types of devices? Then code in something other than assembly.
> There's not much overlap between "desktop/server" (a 64 bit only affair) and "embedded" (mostly 32 bit cores).
Most code compiles nicely targeting either 32-bit or 64-bit platforms. Certainly, there's quite a lot of code that's architecture-specific, and some code that's useful in general in embedded environments and less in full-fledged PCs/servers, but - the symmetric difference not being empty doesn't imply the overlap is empty.
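A trivial sketch of the point (hypothetical code, written against fixed-width and pointer-sized types): the same C source builds for rv32 or rv64 with nothing but a different -march, whereas hand-written assembly would need per-ISA variants.

    #include <stddef.h>
    #include <stdint.h>

    /* Toy checksum -- uses only fixed-width and size types, so it compiles
       unchanged whether the target is -march=rv32imc or -march=rv64gc
       (or x86-64, for that matter). */
    uint32_t checksum(const uint8_t *buf, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = (sum << 1) ^ buf[i];    /* toy rolling mix of each byte */
        return sum;
    }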
Note that they were talking about _assembly_ code paths, and the parent explicitly mentioned that using anything other than assembly would give you more portability.
But I don't even want to deal with those macros; I want a nearly zero-effort, straight copy/paste. The preprocessor scares me, because once some devs start to use it, many will want to push its usage to the max everywhere, and in the end your code path is actually tied to the preprocessor, sometimes with a whole new assembly language defined on top of it, and if it is complex, the "exit cost" will skyrocket.
For instance, fasmg has an extremely powerful macro preprocessor, because that preprocessor is what's actually used to write its assemblers. Due to the tendency of devs to maximize the usage of their SDK tools, some code paths end up with little assembly and a lot of macros! Then you must embrace that new language as a whole, like the opaque "object-oriented model" of some big C++ projects, before being able to do anything real in that project.
Personally, I use a "normal" C preprocessor to define a minimal layer for an Intel-syntax assembler, which allows me to assemble with fasmg, gas, and nasm/yasm. And I am careful to assemble regularly with all of them.
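Not the commenter's actual layer, but a minimal sketch of the idea: C preprocessor macros paper over the directive-syntax differences, so one source file can be run through cpp and then fed to either gas or nasm (the macro names here are made up).

    /* asm_compat.h -- hypothetical example only. The .S source is run
       through the C preprocessor with -DUSE_GAS or -DUSE_NASM, then fed
       to the matching assembler; only the directives differ, while the
       Intel-syntax instructions pass through untouched. */
    #if defined(USE_GAS)
    #  define TEXT_SECTION   .text
    #  define GLOBAL(name)   .globl name
    #  define LABEL(name)    name:
    #elif defined(USE_NASM)
    #  define TEXT_SECTION   section .text
    #  define GLOBAL(name)   global name
    #  define LABEL(name)    name:
    #else
    #  error "define USE_GAS or USE_NASM"
    #endif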
I do the same with C, using cproc/qbe, tinycc, and gcc, and I plan to add simple-cc/qbe. Building with all of them costs me only about 1.1x the time: the extra 0.1 accounts for cproc/qbe + tinycc, and gcc takes the rest.
> There is so much code actually shared, or with very little variations, that it is a big win for code re-use to stick to 64bits, even for "embedded".
"Embedded" is a broad concept. Sure there's applications where the size/cost penalty of a 64 bit core is insignificant. Or you want it anyway to use software designed for it (like common Linux distributions).
But there's also a loooottt of applications where you really want the simplest, lowest-cost, lowest-power cores available: solar-powered mesh sensor networks, RFID tags, tiny controllers in e.g. a USB peripheral, AA-powered children's toys, that clock in your microwave, etc, etc, etc. Often referred to as "deeply embedded".
For such uses, 32 bit cores like ARM Cortex-M series or RV32I (or even -E) rule. To the point that even ancient 8/16 bit architectures like 8051 are still used.
It would be dumb to put a 64b core there just to 're-use code paths'. C compilers are a thing, and assembly programmers know how to handle different ISAs.
This is exactly what I do not want to do: use a C compiler. And even though 32-bit RISC-V assembly is very close to 64-bit RISC-V assembly, I don't want to rely too much on an assembler macro preprocessor to do the switch.
And yes, "embedded" is nowadays quite "broad". Of course, for the "very" embedded, even a 32-bit RISC-V core is overkill; a broadly available 8-bit processor will be enough if you are really counting every cent, which I am not.
In my world, I would have 2 targets. First, a mini domestic server (email, maybe a few small web servers, more?). A Linux-based OS would be the start, but I guess it would end up more like a custom patchwork of code paths from various projects than vanilla "Linux". There are already many 64-bit RISC-V SoCs with existing boards for that. Then, a mini-board built around a 64-bit RISC-V MCU for a DIY custom keyboard, which would require a USB device hardware block, flash memory, a timer, a lot of GPIOs, etc. There are already some options here too, but a tad too overkill (often with "AI").
Of course, if only I could get a powerful RISC-V workstation on which I could play AAA 3D games...
I think new processors need H.264/H.265 support so they can be used for many popular applications, thus making them more popular. Trying to use a Pi as a desktop replacement kinda sucks if you watch YouTube or want to encode video. A nice bonus would be some sort of AI acceleration too.
Softcores on FPGAs tend to be used for housekeeping and babysitting silicon IP blocks. They never handle any actual data processing, because doing so in the fabric is orders of magnitude faster.
The fastest soft-core CPU, at maybe 400 MHz on a thousand-dollar FPGA, is hardly faster than the slowest RISC-V silicon.