As an amusing side-effect, the team working on this effort also implemented IRIX userland support for QEMU, since the original N64 toolchain ran on IRIX on the SGI Indy and they needed the original compilers to verify functional equivalence of their source: https://github.com/n64decomp/qemu-irix .
I honestly love coming to HN to see posts like this and comments like yours. It is always so neat to see the other sides of software engineering. You listed 4 acronyms and I have no idea what any of them are or how they fit into this story but all I want to do is deep dive into each one. It is also awesome to see people so interested in things that I've never even encountered before.
"In 1970, the group coined the name Unics for Uniplexed Information and Computing Service (pronounced "eunuchs"), as a pun on Multics, which stood for Multiplexed Information and Computer Services."
> QEMU [3] is an emulator used to run programs for one machine on another.
More specifically, programs for one processor architecture on another. For example, running on your desktop (usually an x86-based architecture) a Linux operating system designed and compiled for a Raspberry Pi (ARM-based architecture), despite the incompatible architecture. In this case they're running software built for the same processor family the Nintendo 64 targeted, which also happens to have run a Unix OS known as IRIX.
I wrote to SGI in high school asking for some info on their computers and they sent back a stack of beautifully printed, full-color brochures. The Indy had a webcam, which was very rare in those days. Also included was a brochure on the Indigo workstation, which Industrial Light and Magic used for Jurassic Park, etc.
Nintendo is a little mysterious when it comes to what their actual tooling was, but I remember Donkey Kong Country being the first time I read they were using SGIs (or at least the studio "Rare" was).
It's somewhat surprising they used the Indy for developing Mario 64 – I always got the sense that it was somewhat lightweight in performance compared to the Indigo, but a very cool machine either way.
I have an SGI rotting in my garage, what's amazing about them is the quality of the monitor. For CRT displays, the best damn monitor I ever experienced, just CRISP.
The Nintendo 64 had a MIPS R4300 chip, and the SGI Indigo also used MIPS R-series chips: the early ones had an R4000/R4400, the later ones R8000 or newer. I can only speculate that by using an SGI, you could run some of your non-N64-specific code locally and debug faster.
The original PSX had an R3000 chip, but Sony opted for BSD; their devkit ran on FreeBSD PCs, and you built the code and ran it on an actual PSX device. Cheaper...
The PlayStation 1 "TOOL" actually ran Windows [1]. A large part of the PS1's success, however, was the "twin ISA" card dev kit, which could be plugged into any PC compatible, drastically lowering the cost of development for the PS1.
Also BSD != FreeBSD, BSD 4.3 Net/1 (the first BSD released under the BSD license instead of containing AT&T code) was released in 1989.
Was FreeBSD really a requirement? I used to have a Sony Net Yaroze that allowed me to build PSX executables on my PC, using Sony's custom GCC-based toolchain. It didn't require FreeBSD.
Those brochures are probably worth real money on eBay if you still have them, a PowerSeries brochure just sold for $200!
By the time the Indy came out, the Indigo2 had replaced the Indigo, and I suspect a midrange Indy was a good match for a midrange Indigo1 (at much lower cost).
Nintendo made an N64 dev board for the Indy, essentially an N64 on a GIO board, complete with an adapter card to connect controllers.
The joke always was that the Indy was the Indigo without the go :-)
But it was a decent enough machine to develop on, you didn’t need the 3D stuff if you spent all day in Emacs or compiling. Whereas an Indigo was really targeted at say CAD users.
Haha that was because the base Indy was shipping with 16MB of RAM and IRIX 5 was too bloated for that to be usable. Meanwhile everyone with Indigos kept running IRIX 4 until things got better around 5.3.
The Indy had XZ graphics available, which I believe were the same as the top Elan option available on the Indigo (4 GEs)
"We get the inside story on the legendary Rare with an all-star panel - David Doak (GoldenEye), Chris Marlow & Shawn Pile (Conker's Bad Fur Day), David Wise (Donkey Kong Country series) and Kevin Bayliss (Battle Toads/Killer Instinct)"
QEMU is thought of as a hardware emulator, but supports "userland" emulation where the processor ISA is emulated but syscalls and memory are translated to the host OS.
One very cool thing that you can do with it is to use binfmt_misc to tell the kernel to use `qemu-arm` to run ARM binaries, then you can chroot in to an ARM device's filesystem from your x86 workstation, and all of the ARM binaries just work.
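Not a comprehensive guide, but the basic recipe looks roughly like this on a Debian-ish host (the package names and the /srv/arm-rootfs path are my assumptions; check your distro):

```shell
# Install a statically linked qemu-arm; the package's postinst registers it
# with binfmt_misc as the interpreter for ARM ELF binaries.
sudo apt install qemu-user-static binfmt-support

# The interpreter path must resolve *inside* the chroot, so copy it in
# (unnecessary on newer kernels/distros that resolve it from outside).
sudo cp /usr/bin/qemu-arm-static /srv/arm-rootfs/usr/bin/

# From here on, the ARM binaries inside the rootfs just run.
sudo chroot /srv/arm-rootfs /bin/bash
```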
I was just dealing with this today. QEMU was too slow on my MacBook Air though.
Do you have a link to a comprehensive guide on doing this by chance? I was thinking tomorrow I’d just launch an arm instance in AWS and figure it out but I have a dual Xeon workstation at work (windows) that I might try as well.
(Some additional steps needed if you want to use regular chroot instead of nspawn.)
Sometimes qemu shows an error saying some operation isn't supported, but this hasn't broken anything yet for me, even after I did a whole Raspbian Stretch -> Buster upgrade this way.
BTW, with Debian buster and later, you won't have to copy the qemu-arm-static binary around, since the Linux kernel will now use the file from outside the chroot/container.
On a distro like debian you can even use it to build-and-run userspace binaries for an unrelated architecture (some chroot magic was required last time I checked).
You (1) use binfmt_misc to tell the kernel to use `qemu-ARCHITECTURE` to run binaries for that architecture, then (2) make sure you also have all of the libraries that the binary is linked against, then that binary executable should just run seamlessly.
Now, if your ARM binary was compiled to look for libc at /lib/libc.so, but /lib/libc.so is the host's x86 libc, then that obviously won't work; and the easiest way to get the libraries all sorted out is to use a chroot with OS install of the target architecture. If you do go the chroot route, you need to make sure that `qemu-ARCHITECTURE` is statically linked, because it won't have access to the x86 libraries it needs to run after the chroot(2) call happens (which is why most normally-dynamically-linked distros have a "qemu-user-static" package in addition to their normal "qemu-user" package).
But with a multilib scheme like Debian's, where all libraries get installed to /lib/ARCHITECTURE-TRIPLET/ instead of /lib/, then it should be possible to install all of the appropriate target libraries on the host system without a chroot! You "should" just need to configure APT to let you install packages built for that architecture. (I haven't actually tried this; I'm not a Debian user, but I am envious of their multilib).
I've used this to run some 32-bit Linux binaries under Windows Subsystem for Linux (WSL), which only natively supports 64-bit binaries. (Recompilation for 64-bit was not an option.) It wasn't ideal but it did work smoothly for the most part. I just used `dpkg --add-architecture i386 && apt update && apt install libc6:i386` rather than creating a separate chroot. I did have to edit the binfmt registration to remove the 'OC' flags set up by the qemu-user-binfmt package, since these aren't supported by WSL, and manually enable the i386 binfmt which is blocked by default on amd64 platforms. There is also a persistent SIGSEGV in one particular binary which may not be related specifically to running under QEMU.
> It means they managed to get irix running in qemu. Presumably on an x86 cpu.
Not quite. It means that they got qemu to emulate IRIX's syscall layer on Linux. So you can run, let's say, a MIPS IRIX binary on x86 Linux without having to emulate the entire machine.
No - qemu supports "userland" emulation where the processor ISA is emulated but syscalls and memory are translated to the host OS. The IRIX kernel and OS doesn't run in this scenario.
Wine impersonates OS calls (including syscalls) but does not perform emulation on the binary itself. Wine can only run Windows applications written for x86, not Windows applications written for Itanium.
This appears to be running both hardware emulation on the supplied binary, (which is what VMware/KVM/virtualbox etc do) as well as wine-like OS impersonation.
I made up the word "impersonates" for what wine does just to avoid confusion. It's not a word that's used in the literature afaik, although perhaps it (or a word like it) should be.
I think the usual term Wine (plus e.g. WSL1, Darling, Solaris/BSD Linux compatibility shims, etc.) uses is "translate", but "impersonate" does sound closer to what such systems actually do.
Speaking of Wine, you can actually run x86 Wine using QEMU on a Raspberry Pi and run Windows software with it. You essentially chroot into an x86 Debian environment that's running with QEMU, then install Wine in there and run it. There's a product called 'ExaGear Desktop' which makes the process pretty seamless from what I hear.
WINE is not emulating/translating instructions to a different ISA. Rather, it has a win32 loader and inserts shims to map some calls to Windows library functions and others to native (host, e.g. Linux) ones. That's how I understand it. You can, however, theoretically run x86 WINE under QEMU user emulation to run an x86 Windows binary on ARM.
That's a Windows limitation. x64 chips are plenty capable of running 16-bit protected-mode code while the OS runs in long mode. It's just that Windows didn't want to deal with translating HANDLEs back and forth between the two modes.
Wine has never run 16-bit Windows programs in 16-bit mode. They are instead translated to 32-bit at runtime using some magic, notably using 32-bit addresses to emulate 16-bit real mode.
There is some alpha generic mips support for qemu ( https://www.linux-mips.org/wiki/QEMU ), so it could be a set of patches to run IRIX on Qemu's generic MIPS machine emulator...
One thing I've always been curious about: is there any sort of clear continuity of architecture or design patterns between the games in the Super Mario series? Yes, they're probably all from-scratch rewrites of the engine, but could each successive engine be said to be a "descendant" of a previous one, on a design level?
One thing I know (and can be seen in this repo) is that SM64 emulates a version of the NES/SNES "Object Attribute Memory", as a pure-software ring-buffer. (I'd love to know whether that carries on to later titles like Galaxy, 3D World, NSMB(U), Mario Maker, etc.)
Super Mario 3D World's architecture goes back to Super Mario Sunshine. Some parts go back all the way to Super Mario 64, but not the object / actor management. The ring buffer isn't really emulating OAM, either.
You can trace the evolution of "LiveActor" all the way through until it ends up in Super Mario Odyssey.
This architecture was so successful it ended up as the basis for all new Nintendo game development, so Breath of the Wild, Pikmin 3, Splatoon, and Mario Maker all use this new "Actor Library", or "al".
I have not looked at NSMBU, but NSMBWii uses a different core structure originally developed (as far as I know) by the Zelda team. I think it's mostly phased out these days, as is the set of "egg" libraries developed by the Mario Kart: Wii and Wii Sports teams.
> The ring buffer isn't really emulating OAM, either.
I mean, you're right, it's not a literal implementation of OAM in the sense of controlling the same things OAM controls. I was speaking kinda metaphorically.
NES/SNES OAM was useful for reading back entity physics data (because it gave objects X/Y position registers), which meant that developers (incl. Nintendo themselves) often chose to rely on the OAM-object "components" of an entity as the canonical handle for tracking the entity in the game physics (rather than having a table somewhere in work-RAM of separate "physical" components for entities). Games like SMW literally just index a table of actor behaviors off the OAM-object's name-table data; what an entity "is", from the game's perspective, is determined by what it currently looks like!
Since the OAM had a finite size, this reliance on OAM for tracking entities forced games into a structure where entities' lifetimes are coupled to the lifetime of their OAM-object representations. Which meant that every NES/SNES game relying on OAM to track entities needed an algorithm for dynamically allocating OAM-object slots to entities; and so, for evicting entities if OAM was exhausted. (Level design was done with a hard eye for avoiding OAM "thrashing" by keeping entities spaced apart, but the system still needed to be able to handle the case where mobile entities ended up following you and piling up.) Which brought into existence the common OAM LRU cache-eviction algorithm—i.e., the practice of "despawning" the oldest off-screen entities when new on-screen entities need OAM slots.
This determined a lot about the design of these NES/SNES games. It made mobs in these games into things that would lose their state whenever they were scrolled "far enough" off the screen; which in turn forced a design where—rather than a level just running a "start script" that would spawn entities at initial positions, tracking them in RAM from then on—you instead had to adopt a hybrid approach where entities had both an OAM-object representation, and also an associated "spawner" (usually existing just as static level-data in ROM, though sometimes coupled to a bitflag tracking destroyed spawns) that would trigger [re]spawning for the entity.
SM64 is essentially "emulating OAM" in the sense that it assigns entities handles in a fixed-sized buffer, and then uses a very OAM-like logic (basically, "memory pressure" on this buffer) to decide when entities should be de-spawned; and then uses spawners to recreate entities that have been de-spawned due to this memory pressure (meaning that most entities don't "exist" until you get close enough to them.)
SM64 didn't need to do things this way; the N64 has enough RAM to track all the entities in every SM64 map at once, IIRC. They chose to impose this constraint artificially, in order to continue to build SM64 levels according to the design philosophy they had "discovered" due to the original constraints of the OAM system.
Later games in the Mario series, if-and-when they choose to have this de-spawn/re-spawn tracking feature†, are essentially "pretending to have OAM", but not really emulating it the way SM64 does. For example, Mario Maker de-spawns entities when they're scrolled sufficiently far off the screen, in a way that mimics OAM sufficiently well that re-spawning and enemy spawner semantics still work—but which isn't really an OAM-like system, in that there's no static buffer with memory-pressure causing de-spawning (and in fact, as long as the entities are willing to squeeze into one visual screen, existing entities will never be forced to de-spawn.)
† You could get a very interesting analysis of the way Nintendo probably internally divides/project-manages the Mario games, by just determining which titles "emulate" OAM the way SM64 does; which titles loosely mimic OAM, like Mario Maker; and which titles don't even bother with de-spawn/re-spawn tracking at all, but instead have persistent physical entities that just "go quiescent" when they're out of sight. (IIRC there's no Mario title that uses the fourth option—pure view-frustum culling of distant models that continue to "tick" while culled.)
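A rough sketch of that OAM-style scheme in C (all names invented, nothing to do with the actual SM64 code): a fixed-size pool where spawning into a full pool destroys the oldest off-screen entity outright, and fails if everything is on-screen.

```c
#include <assert.h>

#define POOL_SIZE 4            /* tiny for illustration; real OAM held 64 */
#define EMPTY     (-1)

struct slot {
    int id;                    /* EMPTY, or the entity's identity */
    int age;                   /* spawn order, used to find the "oldest" */
    int on_screen;
};

static struct slot pool[POOL_SIZE];
static int next_age = 0;

void pool_init(void) {
    for (int i = 0; i < POOL_SIZE; i++) pool[i].id = EMPTY;
    next_age = 0;
}

/* returns 1 on success, 0 if the pool is full of on-screen entities */
int spawn(int id) {
    int free_slot = -1, victim = -1;
    for (int i = 0; i < POOL_SIZE; i++) {
        if (pool[i].id == EMPTY) { free_slot = i; break; }
        if (!pool[i].on_screen &&
            (victim < 0 || pool[i].age < pool[victim].age))
            victim = i;        /* oldest off-screen entity seen so far */
    }
    if (free_slot < 0) {
        if (victim < 0) return 0;  /* nowhere to put it: spawn fails */
        free_slot = victim;        /* victim is *destroyed*, not paged out */
    }
    pool[free_slot] = (struct slot){ id, next_age++, 1 };
    return 1;
}

void scroll_off(int id) {
    for (int i = 0; i < POOL_SIZE; i++)
        if (pool[i].id == id) pool[i].on_screen = 0;
}

int alive(int id) {
    for (int i = 0; i < POOL_SIZE; i++)
        if (pool[i].id == id) return 1;
    return 0;
}
```

The key point is that eviction is semantic destruction: once entity 2 below loses its slot, only a spawner in ROM can bring a fresh copy of it back.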
> developers (incl. Nintendo themselves) often chose to rely on the OAM-object "components" of an entity as the canonical handle for tracking the entity in the game physics
I don't know how the SNES worked, but AFAIK most NES games did not track objects in this way. Instead, the game engine maintained its own buffers containing object state and copied necessary information to OAM every frame.
OAM only stored graphics state for the rendering hardware, which is not a convenient form for the game engine for a number of reasons. For instance, objects are nearly always composed of several OAM sprites placed next to each other, objects that are not visible during a given frame are not present in OAM, and a single animated object can switch between so many different graphical forms that it would be complicated to identify which object corresponds to a graphics tile from OAM. Additionally, OAM doesn't have extra room for non-graphical object state (like behavior timers or velocity information).
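Roughly, the pattern described above looks like this (invented names, purely illustrative): the engine's own object table is the source of truth, and OAM is rebuilt from it every frame, with each visible object flattened into several hardware sprite entries.

```c
#include <assert.h>

struct sprite { int y, tile, attr, x; };     /* one hardware OAM entry */

struct object {
    int x, y, attr, visible;
    int ntiles;
    struct { int dx, dy, tile; } tiles[4];   /* 8x8 pieces of the object */
    /* ...plus all the state OAM has no room for: velocity, timers, HP... */
};

/* copies visible objects into OAM-style entries; returns entries written */
int build_oam(const struct object *objs, int nobjs,
              struct sprite *oam, int max_oam)
{
    int n = 0;
    for (int i = 0; i < nobjs; i++) {
        if (!objs[i].visible) continue;      /* invisible: simply absent */
        for (int t = 0; t < objs[i].ntiles && n < max_oam; t++) {
            oam[n].y    = objs[i].y + objs[i].tiles[t].dy;
            oam[n].tile = objs[i].tiles[t].tile;
            oam[n].attr = objs[i].attr;
            oam[n].x    = objs[i].x + objs[i].tiles[t].dx;
            n++;
        }
    }
    return n;
}
```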
Semi-offtopic, but you've clearly spent a lot of time studying Nintendo's code, from a range of eras... I'd be curious to hear, if you had to make a very broad assessment, how would you rate the quality of Nintendo's programming?
Nintendo is quite clearly second-to-none on the design/creative end, how much does that translate to the technical aspect of game development? Speaking purely in terms of software.
I find this particularly interesting in the context of a company that appears to retain many of the same programmers today as they did 30 years ago, when software development was obviously much different.
Super Paper Mario uses an extremely similar engine as Paper Mario: The Thousand Year Door, which uses a slightly modified version of the Paper Mario 64 engine.
Intelligent Systems seems to have a good head on their shoulders for code reuse. Enough so that I would suspect that their Fire Emblem and Advance Wars series—when they were releasing concurrently—were the same engine underneath.
(Side-note: I've always wondered how the mini-games in IS's WarioWare series work—whether each game is entirely custom code, or whether they've come up with some sort of DSL for specifying reflex games. If the latter, I would bet that that has a decent genealogy too.)
Well, they made a game where you can make your own microgames (D.I.Y.), and I believe an Iwata Asks revealed it was basically a dumbed down version of the internal tools they had been using, at least for the earlier DS WarioWare game (Touched.) Not sure if that quite answers your question, but I would bet it's some kind of DSL interpreted by a microgame engine.
Super Paper Mario's movement felt quite similar to Thousand Year Door, which was to its detriment as the former was a platformer and the second was an RPG.
FTFY. I don't think the RPG elements of SPM should be ignored; the game plays very differently to any of the other Mario platformers.
It may not be to everyone's tastes but to simplify the matter for the sake of a quick jab is hugely unfair, especially given it has one of the most touching stories in the Paper Mario canon.
Ring arrays are so useful it would be unheard of for those games not to use them, regardless of whether it's the NES/SNES "Object Attribute Memory" or something equivalent. Every game, today and then, "should" have one or more ring arrays in it, but sometimes a junior dev or one in a crunch will use a linked list in rare situations. A notable example is when StarCraft used a linked list that caused a difficult-to-reproduce bug when certain parts of the code were threaded. http://www.codeofhonor.com/blog/tough-times-on-the-road-to-s... (Found @ https://news.ycombinator.com/item?id=5751702)
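For anyone unfamiliar, a minimal ring array looks something like this (invented names, illustrative only). There's no per-node allocation and no pointers to keep consistent across threads: once capacity is hit, the oldest entry is simply overwritten.

```c
#include <assert.h>

#define RING_CAP 3

struct ring {
    int buf[RING_CAP];
    int head;   /* next write position */
    int count;  /* number of live entries, saturates at RING_CAP */
};

void ring_push(struct ring *r, int v) {
    r->buf[r->head] = v;                    /* overwrites oldest if full */
    r->head = (r->head + 1) % RING_CAP;
    if (r->count < RING_CAP) r->count++;
}

/* i = 0 is the oldest live entry */
int ring_get(const struct ring *r, int i) {
    int start = (r->head - r->count + RING_CAP) % RING_CAP;
    return r->buf[(start + i) % RING_CAP];
}
```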
It's not that it's a ring array; it's that it's a fixed-size ring array with an eviction algorithm, and specifically one that holds representations of entities, where the entity is considered to be destroyed in a semantic sense if it gets evicted from the ring array.
Picture a background jobs system like Sidekiq/Resque. Imagine that one worker node of this jobs system had a fixed-size ring array of jobs it had taken. Now imagine that you could push new jobs onto a specific node. And now imagine that the worker node responded by not just overwriting one of the filled slots of the local jobs set, but actually ACKing said job to drop it from the global job-queue system. It's destroying a real entity with persistent global identity, in order to reclaim the slot that the local representation of that entity takes up.
That's what OAM is, when combined with the design pattern I'm talking about. It's a ridiculous system that'd never fly in a business; but it happens to work for games, where you control the world such that you can make the world hold "reminders" for the state you destroyed.
That makes a lot of sense, as it's not like they had a lot of 3D engines for the N64 at launch lol. I wonder if Pilot Wings (for example) also shares a similar rendering pipeline.
PilotWings 64 was made by a separate company (Paradigm), who used a very different structure for their games which feels a lot more "western" to me (the UltraVision 64 "engine" has a large structured data chunk which it reads a lot of stuff from; most Nintendo 64 games don't really have that sort of structure)
I really don't think you can refer to these games as using different 3D engines. The 3D capabilities are ingrained in the N64. The SNES likewise didn't have any 2D engines (except maybe for when the extension chips were used). Perhaps what we're talking about are the game logic engines.
"Miyamoto: We were using the Super Mario 64 engine for Zelda, but we had to make so many modifications to it that it's a different engine now. What we have now is a very good engine, and I think we can use it for future games if we can come up with a very good concept. It took three or so years to make Zelda, and about half the time was spent on making the engine. We definitely want to make use of this engine again."[1]
I wonder if Nintendo shared source code with 2nd parties, like Rareware. I know they provided design consultation on Banjo-Kazooie, but perhaps they also provided source code?
This is cool and illegal. What makes me envious of the West (or countries other than Japan in general) is that this kind of attempt is somewhat condoned and praised, while in Japan there would be a vocal outcry and finger-pointing campaign (with some media exposure) to the point where the author would be forced to shut down the project. It's a blessing that people can pursue things like this, and it's a huge shame that Japan is so anal when it comes to marginally illegal activity in an open space. (I'm sure some people do it underground though.)
> It's a blessing that people can pursue things like this, and it's a huge shame that Japan is so anal when it comes to marginally illegal activity in an open space.
I've noticed spillover effects into Japanese gamers as well -- people being suspicious of or derisive about mods, even when they're perfectly legal and the game has built-in mod support (looking at you Monster Hunter World).
My (Japanese) girlfriend is on the very conservative side of the spectrum there and absolutely hates it when I bring up any kind of modding, and so do her friends -- the culture of "authorial intent is king" is very strangely strong for a culture that also appreciates and enjoys doujin.
Doujin works are made with the awareness that they are parodies of the original work. They do not alter the body of the original work in any way and are, as the term itself implies, self-published. They are made without any direct affiliation with the original work.
Yeah and the way the author hedged this risk is by releasing it all at once. Nintendo may shut it down or even bring the author to court but the project is already complete. As long as just one person keeps a copy it will continue to exist and Nintendo can't do anything against it.
Your use of the term “open space” is interesting. The Comic Market could probably be considered a closed space but 600,000 annual attendants at a convention that glorifies and commercializes copyright infringement (to a good extent) suggests that there’s spaces in Japan for this sort of thing.
Doujin works organically grew underground before the internet era. I think the sole reason doujin work is now somewhat tolerated is that it's not a minority activity anymore. It's big enough to have gained public acknowledgement, but if a similar activity were attempted today by a much smaller group, they would be crushed by the public. It sucks to be a minority in Japan.
I am looking forward to the mods that this will enable. I highly recommend trying Mario 64 in Dolphin at 1080p with a texture pack. An HD mod that added a few more polygons would really round out the experience.
As silly as it may seem to use an emulator to run another emulator, Dolphin makes it quite easy to create and load custom textures, so it's a solid choice in this instance.
N64 emulators are all pretty bad (inaccurate, use a decent amount of resources) and upscaling is relatively expensive. At least, way too expensive for an rpi to handle.
The question is now, would it be possible for someone to make a port of Mario 64 that runs on the Pi, instead of trying to emulate it?
Usually after you get source releases to games, you get people that port them to different platforms. Like how we had Doom on iPods and Kodak digital cameras.
N64 only supported up to 240p, or in rare cases, 480i, which is basically the same thing computationally. Displaying on higher resolution just involves scaling (or up-sampling, but at that kind of resolution jump scaling is probably more appropriate).
I haven't tested N64 games on a RPi personally, but I imagine it would have no trouble with it, and there seem to be several retro-gaming projects that involve N64 games and use the Rpi.
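The cheap scaling mentioned above is just nearest-neighbour pixel replication; a sketch (invented API, row-major integer pixels for simplicity):

```c
#include <assert.h>

/* dst must hold (w*factor) * (h*factor) pixels, row-major;
 * e.g. 320x240 -> 1280x960 is a clean 4x replication */
void upscale(const int *src, int w, int h, int factor, int *dst)
{
    int dw = w * factor;
    for (int y = 0; y < h * factor; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y / factor) * w + (x / factor)];
}
```

Each source pixel just becomes a factor-by-factor block, which is why integer scaling is so much cheaper than the filtered upscales emulators do.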
I got a pi 3b+ as I romanticized the idea of it, but it struggles a bit with SM64. I just use OpenEmu on my higher-powered laptop and HDMI out instead.
Pi format is still fun to tinker with and I encourage you to get one if you're at all interested. The 3b+ just wasn't the right tool for the job in my case. I haven't tried the pi 4, however.
This is the "official" release, where someone from the team that was working on the decompilation is making it public rather than just a random person on the Discord.
But not much has changed, I guess it's hard to make progress in a month.
If I remember correctly, some time ago I saw a video from someone who managed to build a substantial part of SM64 as a native executable and was able to verify that tool-assisted runs played back perfectly on it (hence it being accurate). The video displayed the game as a wireframe and had no audio, since those parts are surely tied to the N64 hardware.
I can't figure out the right keywords to find it again, but you may be able to if you are interested.
"To answer your questions, yes: This is a full source code which can be recompiled with modern toolchains (gcc: ive done this already) and even target other platforms (PC) with quite a bit of work. There already exists some proof of concept wireframe stuff."
You'd need to emulate/simulate/shim all the graphics calls and state changes, but that shouldn't have any bearing on the actual code architecture. In fact, given that Dolphin uses a JIT, you could argue that this already happens to some degree when you're playing Gamecube games, having the source just allows ahead-of-time compilation.
It's interesting there are bits of code that don't have a purpose, and may have been there to support a second player. For example here:
> This is evidence of a removed second player, likely Luigi.
> This variable lies in memory just after the gMarioObject and
> has the same type of shadow that Mario does. The `isLuigi`
> variable is never 1 in the game. Note that since this was a
> switch-case, not an if-statement, the programmers possibly
> intended there to be even more than 2 characters.
I vaguely recall reading that the multiple characters in SM64DS were a feature that was cut from the original game. Am I hallucinating or did Nintendo say that somewhere?
(The additional characters in the DS remake were horribly unbalanced, so I wonder if the earlier implementation would have been better...)
It's a decompiled result; it's incredibly unlikely the comments are from the original code. Rather, they'll have been placed there by the people doing the decompilation.
pokered and its derivatives have been on github for many years. As long as it stays to a small scope Nintendo seems content to let these small projects be. That could change at any second though. Clone while you can.
The copyright status of explicitly decompiled source is still unproven in the US, as far as I know. SAS vs World Programming seems to indicate that decompilation followed by reimplementation over a wall is probably not infringing, but I don't think a case has been tried around the direct output of a decompiler (i.e. OpenRCT, this SM64, etc.).
I didn't look through the repo, but given that the linked README talks about needing an original version of the ROM in order to extract assets, I would guess they're not in there?
What an awesome project. I would love to mess with random stuff like whirlpool strength and see what it does to the game. Efforts like this to make the decompiler output intelligible (e.g. meaningful variable names) make it much more approachable for a technical person like me without much of the niche platform-specific reverse-engineering skill set. In fact there are countless games I'd love to dive into like this.
Train a ML system on a range of parameters (whirlpool strength) until you have a decent port of the game to a neural network and/or tree-based algo. Then try to optimize the game based on people’s enjoyment.
Outside of a few audio and PAL routines (see asm/non_matchings), everything that was written in C has been decompiled back into C. There are a few routines written in ASM, like the boot code and some of the SDK code.
Most of the other "assembly" files are for data, like the level scripts; they're not assembly of actual machine code.
You could try to put those into C, but you're not gaining much--assuming that it's even something that can be represented in C without a bunch of fancy compiler specific tricks. You'd be better off creating a DSL or a custom program suite, which is probably what Nintendo was doing 25 years ago.
The real effort here is cleaning up the assembly, and as you mention it's nowhere close to being done, but it keeps getting posted every once in a while. Here's another post from a month ago: https://www.reddit.com/r/programming/comments/cbvl6l/super_m...
Binary executables are machine code. One step up from machine code is assembly. It's easy to translate machine code to ASM (you are just reversing the opcodes and adding data structures), but from there it gets hard, because compilers do all sorts of tricks to create performant assembly and throw away hints about code structure (e.g. a simple overloaded function may become an ASM routine with 30 parameters depending on how it's called, or vice versa; it's like trying to recreate HD video from MPEG-1, the entropy has been thrown away). So decompiled code is usually left in assembly. Sometimes an effort is made to create the C equivalent, but that's a maddening effort.
More than likely SM64 was written in C with some critical performance parts in ASM (like mode 7 and some of the OAM stuff other threads talk about).
I’m saying this same decompile process could indeed be done to any released game where the source code is lost, because that is effectively what happened in this case.
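To illustrate the entropy point: here's an invented toy routine as a programmer might write it, next to the shape it typically comes back out of a decompiler in, with names, enums and control structure thrown away. The behaviour is identical; the structural hints are what's lost.

```c
#include <assert.h>

enum state { IDLE = 0, WALK = 1, JUMP = 2 };

/* how the original author might have written it */
int update_velocity(enum state s, int vel) {
    switch (s) {
    case WALK: return vel + 1;
    case JUMP: return vel - 2;   /* gravity */
    default:   return 0;
    }
}

/* typical raw decompiler output: auto-generated name, magic numbers,
 * flattened control flow, register-style argument names */
int sub_80241A30(int a0, int a1) {
    if (a0 == 1) return a1 + 1;
    if (a0 == 2) return a1 - 2;
    return 0;
}
```

Projects like this one put in the human effort of turning the second form back into something like the first, which a decompiler alone can't do.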
I think you misunderstood what the parent was saying.
On the technical level, you are correct in that in both scenarios the end result would be the same, as you are going from compiled code to decompiled code.
What I believe the parent is saying, is that applying this to Panzer Dragoon accomplishes more (on the human level), because devs of that game don't have the original source code anymore, while Super Mario 64 devs do.
Not a native english speaker, but this seems like a nitpicky non-issue to me. How is it not the same as "could you please pass me a glass of water" vs. "would you please pass me a glass of water"? Both indicate a request rather than talking about actual physical ability to perform the action.
Also, I would agree more with your point if the parent said "It could be great if this could be done..." instead of "It would be great if this could be done". The first "would" seems to indicate to me pretty clearly that the parent was talking about a request rather than ability.
> Not a native english speaker, but this seems like a nitpicky non-issue to me. How is it not the same as "could you please pass me a glass of water" vs. "would you please pass me a glass of water"? Both indicate a request rather than talking about actual physical ability to perform the action.
in the case of something like the glass of water, "could" makes the sentence more indirect, and more polite.
the original post is "it would be great if [huge task undertaken by unspecified persons] could be done". this native speaker would not attempt to polite-ify a request for something like that (and i don't think other native speakers would either), so the original post can't be making a request. it is expressing a hope that the thing is possible. mburns (reasonably) then explains that it is possible. then codesushi42 sort of goes off the rails, and i can't figure out what they're attempting to convey at this point.
Huh? I wasn't making an appeal to anyone, so your point is moot.
Context is important. Why bother decompiling a game if you have the source already? Of course I meant decompiling games for which there is no source code available on any machine. Nintendo has the source for SM64.
It isn't pedantry. Expressing a wish that something "could" be done is ambiguous. "Could" is both used as you originally intended and as an expression of ability. It's not pedantry to misunderstand, and it's not pedantry for GP to explain why the misunderstanding occurred.
A misunderstanding occurred. The misunderstanding was clarified, acknowledged, and explained. I'm not sure it contributes anything to make accusations of pedantry.
> Both indicate a request rather than talking about actual physical ability to perform the action.
That's why I said I didn't think it was incorrect, but merely ambiguous. The use of "could" could also be interpreted as talking about the physical ability to perform the action. That was precisely how mburns seemed to have interpreted it. My comment was merely trying to clarify your explanation with a simpler version that tried to eliminate the ambiguity that was probably the source of the confusion.
I got an upvote for that comment. Maybe it was them and it did work.
> "It could be great if this could be done..."
That sounds like one wouldn't be sure if it would be great or not. I don't think anyone meant or interpreted that.
This and the comments below are missing the source of this disagreement. The way that conditionals are most commonly structured in English has rapidly shifted over the last 10 years. The simplest way to explain is with examples.
Old style: "If I had tried, I would have succeeded."
New style: "If I would have tried, I would have succeeded."
The extra "would" style used to be restricted only to adding strong emphasis, as in "if you would just LISTEN to me...". Slowly, this extra "would" has crept into other areas, like replacing the subjunctive as in the example:
Old style: "It would be great if this were done."
New style: "It would be great if this would be done."
The new style is "incorrect" English as of a couple of decades ago, but its usage is increasing. It still sounds terribly wrong to my ear, but what determines whether grammar is "correct" is the way in which people actually speak.
They've gone a lot further than that: assigning names to tons of stuff, adding comments, organizing all the files, and setting up a full build process so you can recompile a new ROM with modifications.
Is the problem that there's no point or that you're being flippant, dismissive, and too lazy to see the point? Have you even taken a look through the code?
Let me just say this. Although not a complete restructuring, it's a TON more readable than a bog-standard decompilation of the ROM. This is something you'd know if you spent 10 minutes reading it.
It's more an indicator that apparently either not many meaningful things are happening (which I doubt), or somehow people use this as some retro nostalgia-wallowing where they imagine "Uh yeah, back in high school I was also working on stuff like this, those were the days."
What I am saying is that it doesn't surprise me people do these projects, what surprises me is that enough people care about them for it to make the front page of HN.
I don't get it... If you acknowledge that there are reasons why people would be interested in participating in this project, then why wouldn't people be interested in reading about it?