
Oldest PC


Rich:
Hi CentralWare

--- Quote from: CentralWare on December 05, 2024, 04:37:51 AM --- ... If having 40 pins means "architecture" to you; there are issues. ...
--- End quote ---
It doesn't; that's packaging. The architecture refers to the execution unit (EU), which
both CPUs share. The bus interface units (BIUs) do differ.


--- Quote --- ... (The 88 requires Min/Max and High/Low byte registers to determine which half of the 16-bit value it's attempting to emulate...  the 86 has none of this. ...
--- End quote ---
I'm not sure what you are referring to here. Both have a Min/Max pin that gets tied high or low.
As for "High/Low byte registers", are you referring to AH and AL, BH and BL, etc.?
Because both CPUs have them, just like they both have AX, BX, etc. The 8088 can execute a
MOV AX, MEM  instruction; its BIU simply handles the access as two separate 1-byte transfers.
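To make that concrete, here's a minimal C sketch of the idea. The names, the byte-wide bus_read8 helper, and the memory array are all invented for illustration; it models the behavior Rich describes, not Intel's actual BIU logic.

--- Code: ---
/* Toy model: AX is simply AH:AL, and on an 8088 the BIU turns one
 * 16-bit access into two 1-byte bus transfers.  Illustration only --
 * these names and the byte-at-a-time "bus" are made up. */
#include <stdint.h>
#include <stdio.h>

static uint8_t memory[64];                  /* pretend system RAM */

/* The 8-bit external bus: every access moves exactly one byte. */
static uint8_t bus_read8(uint16_t addr) { return memory[addr]; }

/* What the programmer writes as "MOV AX, MEM": the modeled BIU
 * performs two byte reads and reassembles the little-endian word. */
static uint16_t mov_ax_mem(uint16_t addr)
{
    uint8_t al = bus_read8(addr);           /* low byte first */
    uint8_t ah = bus_read8(addr + 1);       /* then high byte */
    return (uint16_t)((ah << 8) | al);      /* AX = AH:AL     */
}

int main(void)
{
    memory[0x10] = 0x34;                    /* low  byte */
    memory[0x11] = 0x12;                    /* high byte */
    uint16_t ax = mov_ax_mem(0x10);
    printf("AX = 0x%04X (AH = 0x%02X, AL = 0x%02X)\n",
           ax, ax >> 8, ax & 0xFF);         /* prints AX = 0x1234 */
    return 0;
}
--- End code ---

The low byte is fetched first only because x86 stores words little-endian; from the programmer's point of view the result in AX is identical on both CPUs.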


--- Quote --- ... You just said it yourself...  8 bit bus...  versus the 86's 16 bit bus
"The architecture and instruction sets of both chips are identical." ...
--- End quote ---
That's the external bus. Both chips use a native integer size of 16 bits and have
16-bit oriented instruction sets.
32-bit Pentiums had 64-bit data buses. Does that make them 64-bit?

CentralWare:
How about we send the debate upstream?
= See Photo =
Let's put the argument to rest, shall we?
Obviously, not even Intel knows what they're talking about!

Rich:
Hi CentralWare
OK.

DHeadshot:
I'm hesitant to weigh in on this, but the distinctions of 8-bit/16-bit/etc. are a little meaningless anyway, aren't they? The 8086 had a 16-bit data bus, a 20-bit address bus, an 8-bit minimum register size and a 16-bit maximum register size. The Z80 had an 8-bit data bus, a 16-bit address bus, an 8-bit minimum register size and (arguably) a 16-bit maximum register size (at least, it could treat HL like a single register, much as the 8086 could treat AH & AL as AX). The 8088 had a multiplexed 8-bit data bus, but in the same way that the EDUC-8 had a multiplexed 1-bit data bus (it was a serial system). There are arguments here to class the 8086, 8088 and Z80 as all 16-bit or all 8-bit systems, depending on what you pick. Meanwhile, ARMv6 has everything as 32-bit, so is more straightforward, yet ARMv6 is chalk and cheese next to the i386 architecture, which is called 32-bit. I don't feel like the bit-length distinctions are any more than a guide (or at worst, marketing bluster - see, for instance, the console bit-wars).

Anyway, that's my tuppence, but probably not worth much...

CentralWare:
@DHeadshot: I appreciate your take on registers vs. buses, and the i386 comparison drew a grin :)

I agree with @Rich that, from a programming aspect, the "bit" capacity of a processor should be judged by the width of its general registers: more often than not, a software developer doesn't have to concern themselves with how the processor gets from A to Z, just that it's been told to do so, and they otherwise wipe their hands of the task. However, from a hardware perspective, Intel essentially had to introduce a "cloning" method internally to recode an 8008-style processor, if you will, to utilize the 8086 instruction set. This required a buffering system that took eight-bit words and held them (in latches) while waiting for the second word to complete the "sentence," generally speaking.


Here's a motherboard from an IBM-5150 (XT/8088; missing the optional Math-Co)
Immediately to the right of the processor you'll find a number of SN74LS373 chips (8-bit latches). Their entire job in life is to help an eight-bit processor speak in a 16-bit world: the processor either pushes eight bits at a time, one byte to each of two latches, OR external hardware fills the two latches simultaneously (assuming 16-bit hardware) and the processor then reads each latch, one at a time. Nine-bit parity RAM, though... that was a nightmare.
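Here's a rough software sketch of that two-latch idea: 16-bit hardware drops a word into a "low" and a "high" '373-style latch in one go, and the 8-bit CPU then drains them one byte per cycle. The structure and names are invented for illustration; this is not the 5150's actual glue logic, and it only models the external-hardware-fills-the-latches direction.

--- Code: ---
/* Toy model of a pair of '373-style latches bridging a 16-bit
 * peripheral to an 8-bit CPU.  Purely illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t low;    /* latch holding bits 0..7  */
    uint8_t high;   /* latch holding bits 8..15 */
} latch_pair;

/* 16-bit peripheral side: both latches are loaded at the same moment. */
static void latch_load16(latch_pair *lp, uint16_t word)
{
    lp->low  = (uint8_t)(word & 0xFF);
    lp->high = (uint8_t)(word >> 8);
}

/* 8-bit CPU side: the word comes back out one byte per "cycle". */
static uint16_t cpu_read16(const latch_pair *lp)
{
    uint8_t first  = lp->low;    /* bus cycle 1 */
    uint8_t second = lp->high;   /* bus cycle 2 */
    return (uint16_t)((second << 8) | first);
}

int main(void)
{
    latch_pair lp;
    latch_load16(&lp, 0xBEEF);   /* 16-bit card writes once...        */
    printf("CPU sees 0x%04X after two byte reads\n", cpu_read16(&lp));
    return 0;
}
--- End code ---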

Regardless, from a hardware perspective, talking from the outside world (such as an expansion card) into the processor could be done in 16-bit "speak," but timing and machine cycles were always calculated as for an eight-bit device, since that's the most it could cough up in a single bus cycle.

Think of it like this...
You plug in a network card...  the NIC has to be able to keep up with data packets on the wire AND interact with those intended for it.  "Easy peasy!" one thinks.

Our NIC gets a packet and triggers an interrupt for the processor saying "Hey, bartender, I need some service!" and usually the next byte(s) being sent home are ready to be processed as soon as the "green light" comes on.

The NIC then dumps its 16 bits into the latches, not knowing there are split latches to begin with, and goes high to signal the data is ready to be processed.  NOW, on the 8086 the next cycle comes back with an "acknowledge" signal.  On the 88, however...  it's dead silence, as only half of the content has been digested at this point.  For networking, this usually trips a HALT and RESEND, since the silence can be read as a dropped packet/segment.  As such, the "predictive" side of the hardware had to be modified to detect whether or not "wait cycles" were required to "hurry up and wait" for such virtual 16-bit processors.
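A back-of-envelope sketch of why the 8088 side invites "hurry up and wait": a 16-bit bus moves a word in one bus cycle while an 8-bit bus needs two, so the interface has to absorb the difference as wait states. The cycle counts below are purely illustrative, not real 8086/8088 bus timings.

--- Code: ---
/* Illustrative cycle arithmetic for moving 16-bit words over
 * an 8-bit vs. a 16-bit data bus.  Not cycle-accurate. */
#include <stdio.h>

/* bus cycles needed to move `words` 16-bit words */
static unsigned cycles_for_transfer(unsigned words, unsigned bus_bits)
{
    unsigned bytes_per_cycle = bus_bits / 8;   /* 1 for an 8088, 2 for an 8086 */
    return words * 2 / bytes_per_cycle;        /* 2 bytes per word             */
}

int main(void)
{
    unsigned words = 256;                      /* e.g. a small packet buffer */
    unsigned c8086 = cycles_for_transfer(words, 16);
    unsigned c8088 = cycles_for_transfer(words, 8);
    printf("16-bit bus: %u cycles\n", c8086);
    printf(" 8-bit bus: %u cycles (%u extra cycles to wait out)\n",
           c8088, c8088 - c8086);
    return 0;
}
--- End code ---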

Most everything detailed above (yes, I have kids...  the analogy works for all ages! :)  and yes, I started a "family" rather late in life! The IBM above was my beloved!) happens outside the programmer's view, unless the hardware is specifically created to be probed at that level of detail -- which "most" of the time was not the case, as it costs extra to design, produce, etc.


--- Quote from: DHeadshot on December 08, 2024, 11:01:24 AM ---...but the distinctions of 8bit/16bit/etc. are a little meaningless anyway, aren't they?
--- End quote ---
From a programming aspect, they're important just to know which instruction set you're working with.  If you're not interacting directly with outside hardware, you'd never know the difference.
From a hardware aspect...  it's vital.  The people who designed the above motherboard had to go that extra mile, turning an 8-bit (one-lane road) into a virtual 16-bit (two-lane road) with all of the limitations of being a single lane.  (Hence the numerous latches, FFs, etc.: the middle-man hardware made it easier for programmers, who didn't have to differentiate between the 8088 and 8086 EXCEPT on direct hardware projects where those differences mattered because timing or response time/cycles were essential.)


--- Quote from: DHeadshot on December 08, 2024, 11:01:24 AM ---...chalk and cheese...
--- End quote ---
The i386 was, in my opinion, a glorified 286 with 32-bit instructions coded in using the same methodology as the 88 vs. the 86.  "With enough latches and FFs to hold data and build it up into larger words, we can pretend to have a 32-bit processor long before we actually produce a real one!  AND find a way to spend the unused silicon from the 286 batches!"

The ARM processors... for me...  fall into that category of "Those are so adorable!"  The Raspberry silicon (i.e., the RP2040) even more so.  We had a spool of 2040s that our JUKI choked on halfway through a project, so there's a short strip of processors hanging on my workbench's whiteboard (maybe 10-12 chips), but just to think that those itty-bitty (6-7mm?) things could run circles around our 808x bricks of the past :)  Okay, "run" may be a strong word...  quickly walk said circles?
