Sergiu-Radu Pop designed the Transformable Antarctic Research Facility to mimic its jagged icy surroundings. The facility is meant to serve fields that require study in a very specific extreme environment, and even contains a large docking bay, making eco-tourism possible.
is there a reason to write asm still? like, is the speed boost significant enough to matter now that c compilers have gotten better and better over the years?
there are lots of reasons to write assembly and none of them have to do with C compilers. if you are porting an operating system to a new CPU architecture, invariably the very first code you will write will be in assembly. bootloaders are machine specific.
HOWEVER
i like to illustrate the idea of man battling the compiler with the legend of john henry
john henry was a folklore legend, a giant of a man with incredible strength and stamina. he worked building railroads in the olden times, pounding in ties one by one. a man showed up one day, claiming his steam engine could drill through solid rock at a rate unmatched by any man. john henry took him up on this challenge, and with a 20lb hammer equipped in each hand, competed against the steam machine
he won, only to die in his revelry from over-exertion.
that is kind of what writing raw assembly to beat a compiler is like today. sure, in some small cases, you could absolutely outperform a compiler. it’d be hard work and you’d have to know the CPU inside and out (and i mean like, know every single stage of the CPU pipeline, what subcomponents are implicated in which pipeline stage, etc.)
you could do it, but you’d want to kill yourself afterwards. there’s a reason we don’t write in lisp or assembly anymore for most tasks.
when we start to talk about very large programs that encompass hundreds and hundreds of libraries and such, compilers will win for two reasons. the first (and most pragmatic reason) is that people don’t write large, high-level abstract programs in assembly. you’d spend literally decades doing a job that would take a month. your program would be a fucking feat of strength, but it’d be completely impenetrable by anybody else but you. you can barely understand what the fuck someone else was trying to do with their C code, let alone assembly.
the more interesting reason is that compilers now can make really calculated judgement calls based off the performance metrics of completely separate and seemingly unrelated entities that will, in the end, have an impact on performance. these are things you would literally never think of, even if you were dennis ritchie himself.
there are things hand-written in assembly in order to get the fastest, most efficient implementation, though. the most prominent examples are encryption algorithms and hashing algorithms. these are generally very small and contained mathematical operations that can be boiled down to asm very easily, and implementing them by hand in asm is a necessity. this is because such hashing/encryption/checksumming algorithms are meant to be standardized (like sha-1/sha-256, chacha20, blowfish, etc) and ported to every operating system trying to stay current. they’ll be used by hugely powerful computers to do everything from securing your facebook account to securing the united states’ secrets, so by virtue of them being run all the time, the net performance you squeeze out of your implementation will be multiplied by a trillion
also, you are usually competing against other algorithms in things like NIST’s competitions, in which the winning algo is declared the national standard
Real Programmers write in FORTRAN.
Maybe they do now,
in this decadent era of
Lite beer, hand calculators, and “user-friendly” software
but back in the Good Old Days,
when the term “software” sounded funny
and Real Computers were made out of drums and vacuum tubes,
Real Programmers wrote in machine code.
Not FORTRAN. Not RATFOR. Not, even, assembly language.
Machine Code.
Raw, unadorned, inscrutable hexadecimal numbers.
Directly.
Lest a whole new generation of programmers
grow up in ignorance of this glorious past,
I feel duty-bound to describe,
as best I can through the generation gap,
how a Real Programmer wrote code.
I'll call him Mel,
because that was his name.
I first met Mel when I went to work for Royal McBee Computer Corp.,
a now-defunct subsidiary of the typewriter company.
The firm manufactured the LGP-30,
a small, cheap (by the standards of the day)
drum-memory computer,
and had just started to manufacture
the RPC-4000, a much-improved,
bigger, better, faster — drum-memory computer.
Cores cost too much,
and weren't here to stay, anyway.
(That's why you haven't heard of the company,
or the computer.)
I had been hired to write a FORTRAN compiler
for this new marvel and Mel was my guide to its wonders.
Mel didn't approve of compilers.
“If a program can't rewrite its own code”,
he asked, “what good is it?”
Mel had written,
in hexadecimal,
the most popular computer program the company owned.
It ran on the LGP-30
and played blackjack with potential customers
at computer shows.
Its effect was always dramatic.
The LGP-30 booth was packed at every show,
and the IBM salesmen stood around
talking to each other.
Whether or not this actually sold computers
was a question we never discussed.
Mel's job was to re-write
the blackjack program for the RPC-4000.
(Port? What does that mean?)
The new computer had a one-plus-one
addressing scheme,
in which each machine instruction,
in addition to the operation code
and the address of the needed operand,
had a second address that indicated where, on the revolving drum,
the next instruction was located.
In modern parlance,
every single instruction was followed by a GO TO!
Put that in Pascal's pipe and smoke it.
Mel loved the RPC-4000
because he could optimize his code:
that is, locate instructions on the drum
so that just as one finished its job,
the next would be just arriving at the “read head”
and available for immediate execution.
There was a program to do that job,
an “optimizing assembler”,
but Mel refused to use it.
“You never know where it's going to put things”,
he explained, “so you'd have to use separate constants”.
It was a long time before I understood that remark.
Since Mel knew the numerical value
of every operation code,
and assigned his own drum addresses,
every instruction he wrote could also be considered
a numerical constant.
He could pick up an earlier “add” instruction, say,
and multiply by it,
if it had the right numeric value.
His code was not easy for someone else to modify.
I compared Mel's hand-optimized programs
with the same code massaged by the optimizing assembler program,
and Mel's always ran faster.
That was because the “top-down” method of program design
hadn't been invented yet,
and Mel wouldn't have used it anyway.
He wrote the innermost parts of his program loops first,
so they would get first choice
of the optimum address locations on the drum.
The optimizing assembler wasn't smart enough to do it that way.
Mel never wrote time-delay loops, either,
even when the balky Flexowriter
required a delay between output characters to work right.
He just located instructions on the drum
so each successive one was just past the read head
when it was needed;
the drum had to execute another complete revolution
to find the next instruction.
He coined an unforgettable term for this procedure.
Although “optimum” is an absolute term,
like “unique”, it became common verbal practice
to make it relative:
“not quite optimum” or “less optimum”
or “not very optimum”.
Mel called the maximum time-delay locations
the “most pessimum”.
After he finished the blackjack program
and got it to run
(“Even the initializer is optimized”,
he said proudly),
he got a Change Request from the sales department.
The program used an elegant (optimized)
random number generator
to shuffle the “cards” and deal from the “deck”,
and some of the salesmen felt it was too fair,
since sometimes the customers lost.
They wanted Mel to modify the program
so, at the setting of a sense switch on the console,
they could change the odds and let the customer win.
Mel balked.
He felt this was patently dishonest,
which it was,
and that it impinged on his personal integrity as a programmer,
which it did,
so he refused to do it.
The Head Salesman talked to Mel,
as did the Big Boss and, at the boss's urging,
a few Fellow Programmers.
Mel finally gave in and wrote the code,
but he got the test backwards,
and, when the sense switch was turned on,
the program would cheat, winning every time.
Mel was delighted with this,
claiming his subconscious was uncontrollably ethical,
and adamantly refused to fix it.
After Mel had left the company for greener pa$ture$,
the Big Boss asked me to look at the code
and see if I could find the test and reverse it.
Somewhat reluctantly, I agreed to look.
Tracking Mel's code was a real adventure.
I have often felt that programming is an art form,
whose real value can only be appreciated
by another versed in the same arcane art;
there are lovely gems and brilliant coups
hidden from human view and admiration, sometimes forever,
by the very nature of the process.
You can learn a lot about an individual
just by reading through his code,
even in hexadecimal.
Mel was, I think, an unsung genius.
Perhaps my greatest shock came
when I found an innocent loop that had no test in it.
No test. None.
Common sense said it had to be a closed loop,
where the program would circle, forever, endlessly.
Program control passed right through it, however,
and safely out the other side.
It took me two weeks to figure it out.
The RPC-4000 computer had a really modern facility
called an index register.
It allowed the programmer to write a program loop
that used an indexed instruction inside;
each time through,
the number in the index register
was added to the address of that instruction,
so it would refer
to the next datum in a series.
He had only to increment the index register
each time through.
Mel never used it.
Instead, he would pull the instruction into a machine register,
add one to its address,
and store it back.
He would then execute the modified instruction
right from the register.
The loop was written so this additional execution time
was taken into account —
just as this instruction finished,
the next one was right under the drum's read head,
ready to go.
But the loop had no test in it.
The vital clue came when I noticed
the index register bit,
the bit that lay between the address
and the operation code in the instruction word,
was turned on —
yet Mel never used the index register,
leaving it zero all the time.
When the light went on it nearly blinded me.
He had located the data he was working on
near the top of memory —
the largest locations the instructions could address —
so, after the last datum was handled,
incrementing the instruction address
would make it overflow.
The carry would add one to the
operation code, changing it to the next one in the instruction set:
a jump instruction.
Sure enough, the next program instruction was
in address location zero,
and the program went happily on its way.
I haven't kept in touch with Mel,
so I don't know if he ever gave in to the flood of
change that has washed over programming techniques
since those long-gone days.
I like to think he didn't.
In any event,
I was impressed enough that I quit looking for the
offending test,
telling the Big Boss I couldn't find it.
He didn't seem surprised.
When I left the company,
the blackjack program would still cheat
if you turned on the right sense switch,
and I think that's how it should be.
I didn't feel comfortable
hacking up the code of a Real Programmer.
you are in the mid 1980s and computers are in a very strange place. personal computers have not quite caught on yet & the idea of having a personal computer in your home is a foreign concept to most people
most people at this time understood computers as extremely large and expensive mainframes that resided exclusively at businesses to be used for professional and commercial purposes. one would take a relatively cheap console, which looked like an ordinary CRT monitor, and connect it to the mainframe which might reside many rooms or floors away. everyone had their own console and keyboard (and possibly mouse, if they were fancy) but shared the same physical hardware. everyone connected to The Computer
this worked fine for most purposes, but was proving to be unsatisfactory for computationally intensive industries, such as the ones that design and simulate aircraft, or the ones that need to sequence entire genomes. most burroughs or IBM mainframes were built to scale horizontally, meaning they’d better serve 100 consoles performing minor tasks such as data entry or text processing rather than 10 consoles churning out integrals describing how air would flow under an aircraft’s wing. as performing computations like that is not as easy and straightforward as it is now (no matlab or solidworks), there was also a demand for systems that could be easily introspected, debugged, and changed by individual engineers, which was something mainframes couldn’t easily do as one mistake by one engineer could bring down a system serving dozens
thus enter lisp machines
lisp machines are, quite literally, like no other computers produced before or since. lisp machines were made by (pretty much) a single company (symbolics), and their heyday was in the 80’s-90’s. although they contributed a great number of ideas and concepts that carried over into more modern computers, the hardware itself was never to be replicated in any capacity again. they were extremely high-quality and specialized machines.
lisp machines, from the bottom up, are entirely predicated upon the lisp programming language. the hardware is built to suit kernel/system software written in lisp, and all subsystems/programming environments/etc available to the user are written in lisp. this incredibly tight coupling of hardware/software provided what was described to me as “truly the best computers ever made” by someone who had hand-wrapped Z80 boards in the MIT AI labs at one point.
“they were easy to debug, since i had memorized the Z80’s pinout”
within the lisp machine’s editor, zmacs, one could write code, compile code, debug code, debug the system and debug the hardware. the debugging interfaces went down to the microcode, which is much farther than you could manage even today. the amount of engineering effort that went into these machines far outweighs what you might get in a modern computer, and that’s something i don’t think the world will ever see again
very, very few of these machines were made. adjusting for inflation, a lisp machine would run you about $200,000. the software (which was treated like intellectual property on the scale of nuclear energy systems or other such deadly serious things) would easily run into the millions for licenses. the computer in front of you was probably purchased for a sum between $100 and $1000, and it represents $100 to $1000 worth of engineering. computers representing hundreds of thousands to millions of dollars of engineering are, well, pretty slick
symbolics lisp machines are, with very little argument, at the top of the food chain when it comes to collectible computers. if you collect computers, and you are enormously lucky, a lisp machine would be the last computer you would ever get. they are the holy grail
and your boy here fuckin got one
through an incredible deal between myself and some friends in high places, we purchased 3 of these machines from a very respectable man whom i dealt with directly. what you see above is a symbolics 3620, which we got two of. the other machine was a symbolics 3640. it weighs about 200lbs:
this one is being freighted to the curator of a museum, for what it’s worth. what drew me to lisp machines initially was their keyboards (not my images):
it was a dream of mine to one day own one of these keyboards, and now, well, uhh:
selling even one of these in today’s market would recoup the cost of everything we purchased, the van rental & gas between here and boston, the cost of freighting hundreds and hundreds of pounds of equipment from new york to washington state with enough left over to purchase a brand new macbook
the situation was as follows: an older eccentric millionaire type had lots of this equipment rotting in his basement from years past. it needed to go, and he didn’t care much about scoring big for it. the people paying the market price for all this stuff would certainly mistreat it, as they’d just flip it for profit. he wanted to see it go into loving hands who would restore it and treat it properly, but mostly wanted to see it go out of his basement. there was a lot of equipment besides the lisp machines.
here is a color CRT monitor made to go onto a united states battleship:
here are a hell of a lot of tapes full of the aforementioned intellectual property worth more than i care to find out
here’s a hilarious mouse:
this all happened last weekend. right now i am chiefly concerned with freighting all the equipment that is not mine (everything but one complete symbolics 3620 machine with console/keyboard/mouse, and that battleship monitor) to its respective owners/museum curators
then we begin the long and arduous task of restoring it to a working state. so far, i have plugged in my machine & powered it on, which resulted in all the proper lights & such coming on. incredibly good news, as it indicates the backplane is intact and functional; the backplane is the only irreparable part of all of this. as expected, the machine tries to read the disks, which are crap. that is not a problem as they are emulatable via a special adapter i have to make & solder that plugs into a normal SATA hard drive
the machine ran for 24 hours until i turned it off, meaning the power supply is good. running it for this long caused the capacitors, which are very flaky electrical components that are quite susceptible to failure over time, to reform
the next steps are to build that hard drive emulator & fix the CRT console which will certainly take months. i will keep you updated on this incredible piece of history
Finishing up on the first batch of some mini-essays about games, feels, and other things dear to me, when I realise I only have two followers, one of whom is probably dead and the other’s an author!
Finally got to have a go with one of those oculus thingies the kids are raving on about. We used hacked drivers to give the older games some love, eventually got to System Shock, jacked into cyberspace, shit was lost…
I’ve seen the future, and it looks like the 90’s
EDIT: We just spent the entire night going through old games, to try and find a better vomit comet sim, we did…
I’m all for beaten-up old scientific equipment being temporarily repurposed for dumb programmer tricks, and this doesn’t disappoint. Old vector arcade games like Asteroids are like catnip to me due to that un-emulatable glow from the display, so seeing a relatively modern game rendered using similarly bright traces is fascinating!
Sons of Seraph, the followup to Cadence, is on the plate soon. I’m aiming for an early to mid 2017 release.
Synopsis: In an unusually cold autumn 2259 the peace and quiet of Cadence is shattered by a vicious terrorist attack. The group claiming responsibility calls themselves “The Sons of Seraph.” A newly reformed “Go! Group” is shaken by the news that the rogue android is stalking the city once more, this time with a small cult-like army in tow. With Sam missing and the Cadence Defense Organization locked in a war with the Sons, the Go! Group is left to seek answers on their own. Will they get their closure or will the city succumb to the Sons of Seraph?
The album will once again feature a range of tracks that pair up with story chapters and artwork. It continues the story started in Cadence, five years after the first story ends. Keep your eyes and ears open for announcements and samples coming later this year!