Not only is the "back bias" "helpful", it is necessary to prevent the transistors from conducting when they shouldn't, which could lead to a latch-up and catastrophic failure; hence the power sequencing requirements of these ICs.
(If you search online you'll find discussions such as https://electronics.stackexchange.com/questions/455745/why-w... where the accepted answer is close but not quite correct; -5 is substrate bias, 5 is the main supply, and 12 is applied to all the gates of the enhancement load transistors to open them. On the 8080, 12V was also used to drive the clocking circuitry, and this explains why the absolute maximum rating on all the pins --- and thus the gate oxide withstand voltage --- is 20V instead of the 6-7V typically seen for 5V devices.)
>"You might wonder how a charge pump can turn a positive voltage into a negative voltage. The trick is a "flying" capacitor, as shown below. On the left, the capacitor is charged to 5 volts.
> Now, disconnect the capacitor and connect the positive side to ground. The capacitor still has its 5-volt charge, so now the low side must be at -5 volts.
> By rapidly switching the capacitor between the two states, the charge pump produces a negative voltage."
That is a strange and surprising effect of capacitors!
I never knew this before reading this post!
I'll have to experiment with this odd but interesting effect in the future!
Well, it's absolutely not weird. Capacitors are happy to change current instantaneously but not voltage. It's in their spec: i = C·dV/dt. If you charge a capacitor to a particular voltage, and then disconnect the capacitor entirely from its circuit, it still holds that voltage. Swap "+" and "-" and you have a negative output. Couple many capacitors together in an array of switches and you can make nearly any voltage you please, but integer multiples are easiest.
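If it helps, here's a minimal sketch of that idea in Python (the node names and the 5 V figure are just illustrative); it only tracks the capacitor's stored voltage and the reference of its terminals:

    # Toy model of a flying-capacitor inverter: the stored voltage stays
    # fixed, only the reference of the terminals changes.
    V_SUPPLY = 5.0            # volts, illustrative

    # Phase 1: "+" plate to +5 V, "-" plate to ground -> charges to 5 V.
    v_cap = V_SUPPLY          # voltage across the capacitor ("+" minus "-")

    # Phase 2: disconnect, then tie the "+" plate to ground.
    v_plus = 0.0
    v_minus = v_plus - v_cap  # the stored 5 V still appears across the plates
    print(v_minus)            # -5.0: the low side now sits 5 V below ground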
Almost all modern RS-232 ports use a similar approach to generate the negative voltages needed as well. The ubiquitous MAX232 and its copies/clones all use charge pumps to convert a +5V rail into roughly ±10V rails for the line drivers.
Take a look at the MAX233 cross-sections by @TubeTimeUS. It looks like a normal DIP integrated circuit, but it has four discrete capacitors inside the IC package for the charge pumps.
You can also make positive doublers: long ago I made an EPROM programmer (it needs 21 V from 12 V) using a logic-driven capacitor voltage doubler, basically this circuit:
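For a rough feel of the arithmetic behind that kind of doubler, here's a back-of-envelope sketch assuming a simple diode-capacitor doubler with ~0.7 V per diode drop (the actual circuit and values may have differed):

    # Ideal capacitor voltage doubler, minus two diode drops (illustrative).
    V_IN = 12.0          # supply rail, volts
    V_DIODE = 0.7        # assumed drop per silicon diode

    # The flying cap charges to (V_IN - V_DIODE), then stacks on top of the
    # input rail through a second diode.
    v_out = V_IN + (V_IN - V_DIODE) - V_DIODE
    print(v_out)         # ~22.6 V, enough headroom to regulate down to 21 V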
Not really, except at a high level, in that capacitors and inductors both store energy. Capacitors store energy as charge (a steady voltage) while inductors store it in a magnetic field (a steady current), which results in a different circuit topology.
If you want to learn how switching regulators work, start by studying the basic "boost converter".
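As a starting point, the ideal boost-converter relation is just a function of duty cycle; a quick sketch that ignores losses and assumes continuous conduction:

    # Ideal boost converter: V_out = V_in / (1 - D), D = switch duty cycle.
    def boost_vout(v_in, duty):
        return v_in / (1.0 - duty)

    print(boost_vout(5.0, 0.5))    # 10.0 V at 50% duty
    print(boost_vout(5.0, 0.75))   # 20.0 V at 75% duty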
I don't think it's useful to think of capacitors as bad batteries. The only point I was trying to make is that a component doesn't have an inherent "ground", so you can reference one pin to any voltage in the circuit.
Capacitive adding and subtracting is how many ADC (analog-to-digital converter) and DAC (digital-to-analog converter) chips work. There are many architectures, but binary-weighted capacitor arrays are common: by switching them you can create or modify a voltage for a DAC, or use the same trick to zero in on and match the ADC's input value. A common architecture that does this is the SAR ADC [0], which basically uses a DAC as part of the ADC, and that DAC is often capacitive.
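Here's a rough Python sketch of the successive-approximation idea, with the binary-weighted capacitor DAC reduced to plain arithmetic (real parts do this with a capacitor array and a comparator):

    # Toy SAR ADC: try each bit from MSB to LSB, keep it if the trial DAC
    # output does not exceed the input.
    def sar_adc(v_in, v_ref=5.0, bits=8):
        code = 0
        for bit in reversed(range(bits)):
            trial = code | (1 << bit)
            if trial * v_ref / (1 << bits) <= v_in:
                code = trial
        return code

    print(sar_adc(3.3))   # 168 -> 168/256 * 5 V = 3.28 V, within one LSB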
What I didn't get in the article is how this produces a constant negative voltage.
From what I understand with my limited amateur electronics skills, this would produce a square signal swinging between -5V and 0V, at whatever quick pace.
That would average out at -2.5V if we put a low-pass filter on it.
This would work for me with 2 charge pumps, but I only see one large capacitor.
The substrate has some capacitance to hold the voltage steady. The diode blocks the 0V so the substrate only "sees" the -5V. (It acts as a peak detector, not a low-pass filter.)
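A toy numerical version of that peak-detector behavior (idealized diode and pure charge sharing; the capacitor values are just assumptions):

    # The flying cap's output node swings between 0 V and -5 V; the diode only
    # conducts when that node is below the reservoir (substrate) voltage, so
    # the reservoir settles near the -5 V peak, not the -2.5 V average.
    C_FLY, C_RES = 1e-9, 100e-9   # flying and reservoir capacitance, farads
    v_res = 0.0
    for cycle in range(2000):
        v_pump = -5.0             # low side of the flying cap, flipped phase
        if v_pump < v_res:        # ideal diode: conducts only in this case
            # charge sharing between the two capacitors through the diode
            v_res = (C_RES * v_res + C_FLY * v_pump) / (C_RES + C_FLY)
    print(round(v_res, 2))        # -5.0: it settles at the peak, not the mean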
I was actually thinking about a short blog post on the 8086's clock circuit. The short answer is that the clock signal into the 8086 goes through a bunch of inverters to strengthen and shape the signal, and then two-phase clock signals (i.e. clock and clock') wind around the chip.
Many processors of that era used pass transistors for temporary latches between circuits. The 8086 also used dynamic logic for gates, where instead of a pull-up resistor, the output would be precharged during one clock phase and then the gate would pull it down (or not) during the other clock phase.
How interesting. I didn’t think differential clocks were a thing back then.
Or maybe they’re not differential and they’re just routed together because it’s convenient. Since they store state with coupled inverters, I can see why having both phases could be useful in some flip-flop implementations.
The two clock phases don't overlap, and act to "pump" the signals through the circuits on each half-cycle. Circuits from that era effectively use both edges of the clock, and the use of simple pass transistors instead of full flip-flops simplifies the implementation.
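A tiny sketch of that pumping action, with each pass-transistor latch reduced to a variable that only updates on its own phase (a simplification; real latches hold the value dynamically as charge on a gate):

    # Two-phase pipeline toy: a value advances through the phi1 latch on one
    # half-cycle and the phi2 latch on the next, so the phases never race.
    def pump(inputs):
        latch1 = latch2 = 0
        trace = []
        for x in inputs:
            latch1 = x            # phi1 high, phi2 low: first latch samples
            trace.append((latch1, latch2))
            latch2 = latch1       # phi2 high, phi1 low: second latch samples
            trace.append((latch1, latch2))
        return trace

    for step in pump([1, 0, 1]):
        print(step)               # watch each value step through the latches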
Charge pumps are still found in many places, including the high-side of an h-bridge (or half h-bridge) driver, as that allows for using only NMOS for switching.
I assume this isn't a joke, but rather refers to hand-fitted prototypes working, where a mass-produced item would not without significant engineering: if you tweak it until it works, it works.
I interpreted it to mean that the on-paper design can hang together nicely, until it's instantiated in some hardware, followed by the "Oh, yeah. . ." moment.
Again, the number filter mangled the title. The full title from the article is "Inside the 8086 processor, tiny charge pumps create a negative voltage". In the interest of brevity, "processor" could be omitted or replaced with "chip."
Hi Ken, love pretty much everything you write. Was wondering, is it correct to assume that all of the 8086 would have been laid out entirely by hand? Certainly special blocks like these charge pumps must have been, but was there any sort of design automation for things like the recently discussed register file?
I think it was a mixture of hand layout and automated layout. There are a lot of places that do strange things to squeeze out every last bit of space, and I think this optimization was done by machine.
For instance, long signal traces have a lot of twists and turns to be as close together as possible while avoiding obstacles and satisfying the design rules. Sometimes these signals do strange things like switching from poly wires to metal wires and back just so they can be a bit closer. These micro-optimizations don't seem like ones a human would make.
Another thing I've noticed is that two revisions of the chip have identical layouts except for a few traces. On one chip a trace will have a jog that's two 90-degree turns. On the other chip, the same trace will have 45-degree bends. So the two traces are identical except for this short segment. I assume the automated layout algorithm changed slightly, resulting in this change.
> On one chip a trace will have a jog that's two 90-degree turns. On the other chip, the same trace will have 45 degree bends.
If the 45-degree one is the newer version, that might be a DFM optimisation - there is common folklore around PCB design that 90-degree or acute angles can cause various problems, with some hint of truth to it, and I suspect there are similar constraints in photolithography. If the sharp corner was causing yield losses, a newer revision would definitely address that.
Both chips have a mixture of 45 and 90 degree bends, so it's nothing that obvious. About 98% of the paths are the same and then there are these changes with no discernible pattern.
Electrons in a typical circuit have a very low drift velocity (on the order of millimeters per hour for small currents). Of course, individual electrons move very fast, but they move almost as fast when there is no voltage applied; the drift is just a tiny bias on top of that thermal motion.
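For anyone curious where that figure comes from, drift velocity is v = I / (n·q·A); a back-of-envelope sketch (the current and trace cross-section here are assumptions, and the answer scales directly with both):

    # Electron drift velocity v = I / (n * q * A) in copper.
    n = 8.5e28              # free electrons per m^3 in copper
    q = 1.602e-19           # electron charge, coulombs
    I = 1e-3                # 1 mA of signal current (assumed)
    A = 0.25e-3 * 35e-6     # 0.25 mm x 35 um PCB trace cross-section (assumed)

    v = I / (n * q * A)     # metres per second
    print(v * 1000 * 3600)  # ~30 mm/hour; smaller currents or fatter
                            # conductors push this down further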
“Chip” is actually a term of art in the industry, used to describe ‘single passives’ (such as resistors and capacitors) in small SMD packages. There are machines called “chip shooters” which are designed to place those ‘chips’.
Using the term “chip” to describe integrated circuits is amateurish.
Perhaps so in the context of PCB assembly.
In the semiconductor industry we still call them chips.
So do Micron Semiconductor [1], TSMC [2] and Intel [3].