Inside the 8086 processor, tiny charge pumps create a negative voltage (righto.com)
129 points by zdw on July 25, 2020 | 47 comments


Not only is the "back bias" "helpful", it is necessary to prevent the transistors from conducting when they shouldn't, which could lead to a latch-up and catastrophic failure; hence the power sequencing requirements of these ICs.

(If you search online you'll find discussions such as https://electronics.stackexchange.com/questions/455745/why-w... where the accepted answer is close but not quite correct; -5 is substrate bias, 5 is the main supply, and 12 is applied to all the gates of the enhancement load transistors to open them. On the 8080, 12V was also used to drive the clocking circuitry, and this explains why the absolute maximum rating on all the pins --- and thus the gate oxide withstand voltage --- is 20V instead of the 6-7V typically seen for 5V devices.)


>"You might wonder how a charge pump can turn a positive voltage into a negative voltage. The trick is a "flying" capacitor, as shown below. On the left, the capacitor is charged to 5 volts.

Now, disconnect the capacitor and connect the positive side to ground. The capacitor still has its 5-volt charge, so now the low side must be at -5 volts.

By rapidly switching the capacitor between the two states, the charge pump produces a negative voltage."

That is a strange effect of capacitors!

I never knew this before reading this post!

I'll have to experiment with this weird but interesting effect in the future!


Well, it's absolutely not weird. Capacitors are happy to change current instantaneously but not voltage. It's in their spec: i = C·dV/dt. If you charge a capacitor to a particular voltage, and then disconnect the capacitor entirely from its circuit, it still holds that voltage. Swap "+" and "-" and you have a negative output. Couple many capacitors together in an array of switches and you can make nearly any voltage you please, but integer multiples are easiest.
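To make this concrete, here's a toy Python simulation of the inverting charge pump described in the article: a flying capacitor is alternately charged from the supply and then flipped so it dumps charge through a diode into a reservoir capacitor below ground. All component values, the constant-current load, and the ideal-switch/ideal-diode behavior are assumptions for illustration, not the 8086's actual parameters.

    # Toy model of an inverting ("flying capacitor") charge pump.
    # All values are illustrative; switches and the diode are ideal.
    VDD = 5.0        # supply voltage
    C_FLY = 1e-9     # flying capacitor, 1 nF
    C_RES = 10e-9    # reservoir capacitor on the output, 10 nF
    I_LOAD = 1e-4    # constant current drawn from the negative rail
    T_PHASE = 1e-6   # duration of each clock phase

    v_out = 0.0      # reservoir (output) voltage, starts at 0 V

    for cycle in range(200):
        # Phase 1: flying cap charges to VDD between the supply and ground.
        v_fly = VDD

        # Phase 2: its top plate is grounded, so its bottom plate sits at
        # -v_fly. If that is below v_out, the diode conducts and the two
        # capacitors share charge until their voltages equalize.
        if -v_fly < v_out:
            q_total = C_FLY * (-v_fly) + C_RES * v_out
            v_out = q_total / (C_FLY + C_RES)

        # The load pulls the negative rail back toward ground a little
        # during each full cycle.
        v_out += I_LOAD * 2 * T_PHASE / C_RES

    print(f"output settles near {v_out:.2f} V")

Running this, the output walks down from 0 V and settles a fraction of a volt shy of -5 V, at the point where each cycle's charge transfer exactly makes up for the load's droop.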


Almost all modern RS-232 ports use a similar approach to generate the negative voltages they need. The ubiquitous MAX232 and its copies/clones all use charge pumps to convert a +5V rail to +/-12V rails for the line drivers.


Take a look at the MAX233 cross-sections by @TubeTimeUS. It looks like a normal DIP integrated circuit, but it has four discrete capacitors in the IC package for the charge pumps.

https://twunroll.com/article/1286136812658806785#


Very cool -- I always wondered how they integrated those caps into the package


You can also make positive doublers: long ago I made an EPROM programmer (it needs 21 V from a 12 V supply) using a logic-driven capacitor voltage doubler, basically this circuit:

https://circuitdigest.com/electronic-circuits/voltage-double...
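For reference, the back-of-envelope arithmetic for a two-diode flying-capacitor doubler of that sort (the 0.7 V silicon diode drop is an assumed value):

    # Idealized two-diode capacitor voltage doubler: the flying cap charges
    # to V_IN - V_DIODE, then gets stacked on top of the supply, so the
    # output is roughly 2*V_IN minus two diode drops. Values illustrative.
    V_IN = 12.0
    V_DIODE = 0.7

    v_out = 2 * V_IN - 2 * V_DIODE
    print(v_out)  # ~22.6 V, enough headroom to regulate down to the 21 V an EPROM wants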


Is this also how switching transformers work?


Not really, except at a high level, in that capacitors and inductors both store energy. A capacitor stores energy as charge at a steady voltage, while an inductor stores energy in a magnetic field at a steady current, which leads to a different circuit topology.

If you want to learn how switching regulators work, start by studying the basic "boost converter".
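For what it's worth, the headline result for an ideal boost converter in continuous conduction is V_out = V_in / (1 - D), where D is the fraction of each cycle the switch is on. A minimal sketch (the function and values are just for illustration):

    # Ideal continuous-conduction boost converter: the inductor stores
    # energy while the switch is on, then forces its current through the
    # diode into the output while the switch is off.
    def boost_vout(v_in: float, duty: float) -> float:
        assert 0.0 <= duty < 1.0
        return v_in / (1.0 - duty)

    print(boost_vout(5.0, 0.5))  # 10.0: a 50% duty cycle doubles the input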


The way I see it, this isn't really about capacitors. You could do the same thing, not practically but conceptually, with a rechargeable battery.


In many ways, a capacitor is just a shitty battery

I think conceptually any circuit element with reactance/memory would do; it's just a matter of cost and spec.


I don't think it's useful to think of capacitors as bad batteries. The only point I was trying to make is that a component doesn't have an inherent "ground", so you can reference one pin to any voltage in the circuit.


If you use extremely low leakage switches and capacitors, this technique can be used to add and subtract voltages at very high precision: https://www.analog.com/media/en/technical-documentation/data...


A capacitor adder, or rather, accumulator! And analog too, not digital!

I love it!

Just don't load this accumulator with $FFFFFFFFFFFFFFFF -- as that will generate a voltage so high it will fry the whole system! <g>

No, I'm just kidding! (That was just an attempt at humor! <g>)

On a serious note -- I think the idea is both fascinating and meritorious!


Capacitive adding and subtracting is how many ADC (analog-to-digital converter) and DAC (digital-to-analog converter) chips work. There are many architectures, but binary-weighted capacitor arrays are common: by switching them you can create or modify a voltage for a DAC, or zero in on and match the input value in an ADC. A common architecture that does this is the SAR ADC [0], which basically uses a DAC as part of the ADC, often a capacitive one; there's a rough sketch of the approximation loop below the reference.

[0] https://en.m.wikipedia.org/wiki/Successive-approximation_ADC
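For the curious, here's a minimal Python sketch of the successive-approximation loop: an idealized model with a perfect comparator and binary-weighted DAC, not any particular chip's implementation.

    # Idealized SAR ADC: try each bit from MSB down, keeping it only if
    # the resulting DAC voltage does not overshoot the input.
    def sar_adc(v_in: float, bits: int = 8, v_ref: float = 5.0) -> int:
        code = 0
        for bit in reversed(range(bits)):
            trial = code | (1 << bit)                 # tentatively set this bit
            if trial * v_ref / (1 << bits) <= v_in:   # comparator decision
                code = trial                          # keep the bit
        return code

    print(sar_adc(1.8))  # 92 -> 92/256 * 5 V = 1.797 V, within one LSB of the input

In the real capacitive version, "trying a bit" means switching one of the binary-weighted capacitors between ground and the reference, and the comparator looks at the shared top plate.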


What I didn't get in the article is how this produces a constant negative voltage. From what I understand with my amateur electronics skills, this would produce a square signal swinging between -5V and 0V, at whatever quick pace it switches.

That would average out to -2.5V if we put it through a low-pass filter.

This would make sense to me with two charge pumps alternating, but I only see one large capacitor.


The substrate has some capacitance to hold the voltage steady. The diode blocks the 0V so the substrate only "sees" the -5V. (It acts as a peak detector, not a low-pass filter.)


Kens, I love the series on the 8086. For the next article can you go over how Intel did clock distribution?


I was actually thinking about a short blog post on the 8086's clock circuit. The short answer is that the clock signal into the 8086 goes through a bunch of inverters to strengthen and shape the signal, and then two-phase clock signals (i.e. clock and clock') wind around the chip.

Many processors of that era used pass transistors for temporary latches between circuits. The 8086 also used dynamic logic for gates, where instead of a pull-up resistor, the output would be precharged during one clock phase and then the gate would pull it down (or not) during the other clock phase.
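To illustrate the precharge/evaluate idea, here's a toy Python model of one dynamic gate, with the two clock phases collapsed into a single call; the NAND function is just an example, not a specific 8086 circuit.

    # Dynamic-logic gate: the output node is precharged high on one clock
    # phase, then the pull-down network conditionally discharges it on the
    # other phase. Modeled here as two sequential steps.
    def dynamic_nand(a: bool, b: bool) -> bool:
        out = True       # phase 1: precharge the output node high
        if a and b:      # phase 2: evaluate; both inputs high discharges it
            out = False
        return out

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "->", int(dynamic_nand(a, b)))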


How interesting. I didn’t think differential clocks were a thing back then.

Or maybe they’re not differential and they’re just routed together because it’s convenient. Since they store state with coupled inverters, I can see why having both phases could be useful in some flip-flop implementations.


https://en.wikipedia.org/wiki/Gated_latch#Master–slave_edge-...

The two clock phases don't overlap, and act to "pump" the signals through the circuits on each half-cycle. Circuits from that era effectively use both edges of the clock, and the use of simple pass transistors instead of full flip-flops simplifies the implementation.


Charge pumps are still found in many places, including the high side of an H-bridge (or half H-bridge) driver, since that allows using only NMOS transistors for switching.


> 3. Prototype designs always work.

Looked jolly awesome on the whiteboard, though.


I assume this isn't a joke, but rather refers to hand-fitted prototypes working where a mass-produced item would not without significant engineering: if you tweak it until it works, it works.


I interpreted it to mean that the on-paper design can hang together nicely, until it's instantiated in some hardware, followed by the "Oh, yeah..." moment.


Again, the number filter mangled the title. The full title from the article is "Inside the 8086 processor, tiny charge pumps create a negative voltage". In the interest of brevity, "processor" could be omitted or replaced with "chip."


I was wondering why the title ended up so bizarre. In any case, I'm the author if anyone has questions on the 8086 internals.


Hi Ken, love pretty much everything you write. Was wondering, is it correct to assume that all of the 8086 would have been laid out entirely by hand? Certainly special blocks like these charge pumps must have been, but was there any sort of design automation for things like the recently discussed register file?


I think it was a mixture of hand layout and automated layout. There are a lot of places that do strange things to squeeze out every last bit of space, and I think this optimization was done by machine.

For instance, long signal traces have a lot of twists and turns to be as close together as possible while avoiding obstacles and satisfying the design rules. Sometimes these signals do strange things like switching from poly wires to metal wires and back just so they can be a bit closer. These micro-optimizations don't seem like ones a human would make.

Another thing I've noticed is that two revisions of the chip have identical layouts except for a few traces. On one chip a trace will have a jog that's two 90-degree turns. On the other chip, the same trace will have 45-degree bends. So the two traces are identical except for this short segment. I assume that the automated layout algorithm changed slightly, resulting in this change.


> On one chip a trace will have a jog that's two 90-degree turns. On the other chip, the same trace will have 45-degree bends.

If the 45-degree one is the newer version, that might be a DFM optimisation -- there is common folklore around PCB design that 90-degree or acute angles can cause various problems, with some hint of truth to it, and I suspect there are similar constraints in photolithography. If the sharp corner was causing yield losses, a newer revision would definitely address that.


Both chips have a mixture of 45 and 90 degree bends, so it's nothing that obvious. About 98% of the paths are the same and then there are these changes with no discernible pattern.


In the suspected auto layout sections does it follow a modern standard cell convention?


It's about as far as you can get from standard cell. Every logic gate is different with the transistors shaped to fit in their surroundings.


That’s crazy. It seems like it would make the software so much harder but I guess it would get the density up. I wonder how they closed timing.


Electrons fly off as they try to take the 90-degree bend.


There is a bit of truth to that --- sharp corners are more prone to electromigration:

https://en.wikipedia.org/wiki/Electromigration#Via_arrangeme...


Electrons in a typical circuit have a very low drift velocity (on the order of millimeters per hour). Of course, the electrons individually move very fast, but they are almost as fast when no voltage is applied.
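A back-of-envelope check, using v = I / (n q A) with copper's free-electron density (the current and wire size here are arbitrary):

    # Electron drift velocity in a copper conductor, v = I / (n * q * A).
    I = 0.1          # current, amperes
    n = 8.5e28       # free electrons per cubic meter in copper
    q = 1.602e-19    # electron charge, coulombs
    A = 1e-6         # cross-section, square meters (1 mm^2)

    v = I / (n * q * A)              # ~7.3e-6 m/s
    print(v * 3600 * 1000, "mm/h")   # ~26 mm/h: glacially slow, as claimed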


Modern tools pale in comparison to hand-laid-out designs, so if something is very clever about saving area, it's unlikely it was machine-created.


Followup question: when did we start describing these things with GDSII files (was there a GDSI?) rather than giant sheets of acetate?


Fixed now. Thanks!


“Chip” is actually a term of art in the industry, used to describe ‘single passives’ (such as resistors and capacitors) in small SMD packages. There are machines called “chip shooters” which are designed to place those ‘chips’.

Using the term “chip” to describe integrated circuits is amateurish.


Perhaps so in the context of PCB assembly. In the semiconductor industry we still call them chips. So do Micron Semiconductor [1], TSMC [2], and Intel [3].

[1] https://www.micron.com/foundation/semiconductors

[2] https://www.tsmc.com/english/dedicatedFoundry/technology/SoI...

[3] (pdf) http://download.intel.com/pressroom/kits/chipmaking/Making_o...


And the usage goes back to at least August 1975 (likely earlier):

http://archive.6502.org/datasheets/mos_6501-6505_mpu_prelimi...

"but including an on-chip clock ... in addition to the on-chip clock"


Two unrelated papers in AFIPS '65 use ‘chip’ in their abstracts, in the integrated circuit sense, and neither draws attention to the term.

https://dl.acm.org/doi/10.1145/1463891.1464005

https://dl.acm.org/doi/10.1145/1463891.1463961


Tell that to the IEEE https://www.hotchips.org


Wrong.


Nice article



