javasaurus: (Super Java!)
[personal profile] javasaurus
Naive question, no?

Seriously, if I'm curious about the on-the-chip workings of a computer, I'm not sure where to even begin finding information. Or even simpler, if I wanted to understand how pre-microprocessor computers worked, where would I even begin? Maybe a kid's electronics kit? "build your own calculator" or something?

Date: 2007-03-15 11:14 pm (UTC)
From: [identity profile] acroyear70.livejournal.com
if you're still curious next time i see you, i'll loan you my "TTL" book. Transistor-Transistor Logic is the key to all chips and from there to all computers.

it all starts with transistors, assembled into logic gates, assembled into humongous collections of "xnor" operations.

a basic flow of electrons triggers gates to activate based on how much is flowing through. if enough flows through, the gate "opens". this in turn sends electrons to other gates that may or may not open. if a gate is closed, it is "grounded", just like an electrical plug in an appliance.

from there it's a matter of constructing a gate that opens if some combination of two inputs is "full" - AND, OR, XOR, XNOR are the basic possibilities. build that (which you can do with a decent set of legos!) and the rest is easy. in TTL we learn to build those gates with (if i recall correctly) 3 transistors total.
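
here's a rough sketch of those four two-input gates as little python functions, treating 1 as "enough flow to open the gate" and 0 as "grounded" (my own illustration, nothing from a particular chip):

# the four gates named above, as functions of two bits
def AND(a, b):  return a & b          # open only if both inputs are full
def OR(a, b):   return a | b          # open if either input is full
def XOR(a, b):  return a ^ b          # open if exactly one input is full
def XNOR(a, b): return 1 - (a ^ b)    # open if the inputs match

# print the truth table for each gate
for name, gate in [("AND", AND), ("OR", OR), ("XOR", XOR), ("XNOR", XNOR)]:
    rows = [f"{a}{b}->{gate(a, b)}" for a in (0, 1) for b in (0, 1)]
    print(f"{name:4s}: {'  '.join(rows)}")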

yes, a program is mostly "if" statements, but just like creation stories, it's turtles (or "ifs") all the way down.

dynamic memory is merely a gate that, instead of being open based on a particular flow, is open or closed until a particular flow CHANGES. LCD screen controllers work the same way - hit it with a pulse and it's lit. hit it again and it's off. but in those two cases there's another flow still going on to hold onto that state.
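
here's a sketch of that "holds its state until the flow changes" idea - a set/reset latch made of two cross-coupled NOR gates (the NOR latch is my choice of illustration, not something named above):

def NOR(a, b):
    return 1 - (a | b)

def sr_latch(s, r, q, q_bar):
    """one settling step of the cross-coupled pair"""
    for _ in range(4):                 # iterate until the feedback stabilizes
        q, q_bar = NOR(r, q_bar), NOR(s, q)
    return q, q_bar

q, q_bar = 0, 1                        # start with the bit "off"
q, q_bar = sr_latch(1, 0, q, q_bar)    # pulse "set"   -> the bit turns on
print(q)                               # 1
q, q_bar = sr_latch(0, 0, q, q_bar)    # remove the pulse -> the bit HOLDS its state
print(q)                               # 1
q, q_bar = sr_latch(0, 1, q, q_bar)    # pulse "reset" -> the bit turns off
print(q)                               # 0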

"everything is XNOR" - it is possible to build a memory "bit" with exclusively xnor logical operations. it's possible to build just about any logical operation (including and, or, and xor) with just the right combinations of xnor. bit shifting (multiplication by factors of 2) and addition are both operations that can be built using these gated operations, and just about everything else mathematical can be built from those two. yes, division's a bitch (and why there were specialized floating point chips until the Pentium merged it into the main CPU just 'cause they needed *something* to do with all that extra space) but it really is possible with just these little xnor gates. a couple hundred of 'em, but there you go.

when chips were pricey, the way to go was to minimize the number of chips you needed. This type of consolidation of operations was Steve Wozniak's genius: the disk controller for most S-100 computers of the day needed 35 chips to work, and Woz built the Apple ]['s with only 7.

this seems (and is) counterintuitive. in programming we use logic to reduce operations to their simplest form. in TTL you use logic to reduce things down to the fewest operations that can all be consolidated en masse on a single chip, then aim for the fewest chips to accomplish a task.

flash memory, rom, eproms, and the like retain their contents without the flow (trapped charge or hardwired bits rather than anything magnetic). you ping one with an "address" to get its state.
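
a toy model of "ping it with an address to get its state" - a tiny read-only memory as a lookup from addresses to stored bytes (the contents here are made up purely for illustration):

ROM = {
    0x00: 0xA9,   # each address holds one stored byte
    0x01: 0x42,
    0x02: 0x8D,
}

def read(address):
    """put an address on the bus, get the stored state back"""
    return ROM.get(address, 0xFF)   # unprogrammed cells read as all 1s

print(hex(read(0x01)))   # 0x42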

timing circuits control the "pulse" and define an instant so that operations don't overlap. a pulse sets memory, the next pulse reads it, the next pulse processes it. literally the flow of electrons is pulsing at all times. that's the "hertz" rate (now in gigahertz) they talk about when they talk about processor speed and bus speed, and the bus speed is merely the speed of that flow between the CPU and the rest of the hardware.
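
here's a sketch of that pulse idea - a clock ticks, and on each tick exactly one step of the set / read / process cycle happens, so nothing overlaps (the three-step cycle mirrors what i said above; real hardware is far more involved):

memory = {"cell": 0}
steps = ["set", "read", "process"]

def tick(cycle):
    step = steps[cycle % 3]
    if step == "set":
        memory["cell"] = cycle                 # one pulse writes the cell
    elif step == "read":
        memory["latched"] = memory["cell"]     # the next pulse reads it back
    else:
        print(f"cycle {cycle}: processed value {memory['latched']}")

for cycle in range(9):        # nine pulses = three full set/read/process rounds
    tick(cycle)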

its limitation is that the farthest device must be able to get its data back within the time of a pulse, lest it bleed into a subsequent operation. the physical limit is the speed of light in an ideal system, but really it's impurities in the silicon that limit it - we're faster because we're better at building silicon chips, at the rate that Moore defined in his famous law 35 years ago.
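
a quick back-of-the-envelope on that physical limit - how far a signal can travel in one clock pulse even at the speed of light in a vacuum (at 3 GHz it's only about 10 cm):

c = 299_792_458                                 # speed of light, metres per second
for clock_hz in (1e6, 100e6, 1e9, 3e9):         # 1 MHz up to 3 GHz
    metres_per_pulse = c / clock_hz
    print(f"{clock_hz/1e9:6.3f} GHz -> {metres_per_pulse*100:10.2f} cm per pulse")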

"bubble" (vacuum tube) computers basically just used tubes as very large transistors, before we figured out how silicon works. before that, computers were mechanical, trying to be variations of the clock, and thus were only capable of adding and multiplying.

back in the day, they used to require CS students to know this stuff and encouraged it for physics students as well. today? many CS programs are trade schools, training for programming jobs and little else. it's a rant I've had building up in my head pretty much since '92 when JMU's requirements changed.

Date: 2007-03-15 11:21 pm (UTC)
From: [identity profile] acroyear70.livejournal.com
(note: my "pulse" model was considerably simplified - most memory operations can take between 8 and 20 pulses to complete, most of it being address resolution. the system has to have "stuff to do" while waiting for memory to change. registers are high-speed memory that changes more quickly because they use smaller address spaces to access; most are built into the main CPU).

Date: 2007-03-16 03:13 am (UTC)
From: [identity profile] javasaurus.livejournal.com
This helps a lot, thanks!

Just having a name (TTL) to call this topic is extremely useful -- it means I have a starting point at the library (I still have borrowing privs at my alma mater).

Date: 2007-03-16 03:14 pm (UTC)
From: [identity profile] blueeowyn.livejournal.com
You may want to read some of the links in Sil's alternative journal as well. Or ask a certain Jedi fan who does some circuit work. That said, had I seen this before Acroyear70, I would have said most of the same stuff (we had to learn VERY basic circuitry in ENES390)
