ADC Linear Gain/Offset Correction

In my last post I explored the errors of unprocessed ADC data. In this post, I will show what happens if you add some simple gain + offset correction.

Arduino INL

Gain error is caused by consistent DNL. The ideal code width is 1 LSB, but let's say there is a predictable DNL error of +.1 LSB- that means that for every code (and there could be a lot of codes!), the measured value strays from the ideal value by an additional +.1 LSB. At the end of the range, the accumulated difference will be +.1 LSB * the number of codes- in this example, that accumulated error is the INL. Take a look at the INL of the Arduino above- the average code width is 1.08, so the INL increases by .08 LSB/code. Therefore the INL looks like a straight line when plotted against code.
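This accumulation is easy to sketch numerically (a toy model with made-up numbers, not the measured Arduino data):

```python
import numpy as np

# Toy model: every code is 0.08 LSB too wide, like the Arduino's 1.08 average width.
n_codes = 1024
dnl = np.full(n_codes, 0.08)   # constant DNL per code, in LSBs

# INL is the running sum of DNL, so a constant DNL gives a straight line.
inl = np.cumsum(dnl)

print(inl[-1])   # 0.08 LSB/code * 1024 codes ~= 82 LSBs at the top of the range
```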

Corrected INL plot for Arduino

If we just assume each LSB is a little bigger, we get a much lower INL. This costs us a little voltage resolution (.08 LSBs per code), but we end up with a MUCH lower INL. This is the same data, but with gain correction- INL was reduced from 85 LSBs to about 6 LSBs.
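A minimal sketch of that gain correction, assuming a uniform 1.08 LSB code width (toy numbers, not the real data):

```python
import numpy as np

# Toy transfer function: every code is uniformly 1.08 LSB wide.
n_codes = 1024
transitions = np.cumsum(np.full(n_codes, 1.08))   # code edges, in ideal LSBs
ideal = np.arange(1, n_codes + 1)

inl_raw = transitions - ideal                     # ramps up to ~82 LSBs

# Gain correction: assume each LSB is really (average width) big.
avg_lsb = 1.08
inl_corrected = transitions / avg_lsb - ideal     # ~0 when the error is uniform

print(inl_raw.max(), np.abs(inl_corrected).max())
```

With real data the residual INL is not zero, because the code widths are only wider on average, not uniformly.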

However, the INL can still be improved- at low voltages, most ADCs don't work very well. This causes a large DNL at the first code, as seen in the corrected plot- it is where the line shoots straight up from 0 to about 6 at the first ADC code.

Arduino Uno ADC near code 0. Orange line is the end of ADC code 0

On a plot of DAC input vs ADC output, it looks like the above- basically code 0 has a huge width. We can null this out by just adding a small offset to all the data. Code 0 will still be wrong, but the rest of the codes will be right. Because the error only exists for a single code (code 0), we can usually ignore it.
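Nulling the fat code 0 with a DC offset can be sketched the same way (toy data with a planted 6 LSB first code):

```python
import numpy as np

# Toy data: code 0 is 6 LSB wide, every other code is ideal.
widths = np.ones(1024)
widths[0] = 6.0
inl = np.cumsum(widths) - np.arange(1, 1025)   # flat +5 LSB error above code 0

# Subtract the extra width of code 0 from all the data.
offset = widths[0] - 1.0
inl_corrected = inl - offset

print(inl_corrected[1:].max())   # every code above code 0 is now right
```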

MUCH better

Here is the DNL for the fully corrected ADC- the max DNL across the whole ADC is closer to .8 now, except at code 0, which is off by about 6 LSBs. This is equivalent to saying we can't measure less than 6 LSBs, but that with these corrections we get an accuracy of about .00084 mV (with 100x oversampling). That is pretty darn good.

What about the other ADCs?

How much can the other ADCs be improved? The ADS already has a pretty perfect gain and very little offset, so I skipped trying to fix it. Note the new LSB size and standard deviation are different than the previous values – to avoid the influence of the extremes of the ADCs, I only used the middle 90% of the ADC range to calculate the average LSB size.

ADC Name | Lost Range | New LSB size | Compensated INL (LSB) | INL Improvement | Compensated INL (volts)
ESP (COMP) | 197 | 1.00±.71 | 23.8* | 129.2 | .006
*this is actually much worse

Surprisingly, the ESP and the SAMD are much closer in this comparison. SAMD still wins out, but at least the ESP32 has recovered slightly from having a three-digit INL. The one caveat here is that my calculation of code widths does not account for missing codes, which produce -1 DNL each- this data is only valid if the missing codes on the ESP (of which there are 15) can be ignored.

Arduino, ESP32, SAMD21, ADS1015 ADC Comparison

Hopefully my catchy title helps people find this post. After spending some time building a tool to compare A/D converters, I wanted to share some results from popular micros and previously popular (but now out of stock) ADCs.

Single Reading Accuracy/ENOB

ENOB, effective resolution, noise free resolution – these are all ways people describe an ADC. I'll provide the ENOB based on IEEE 1057, which is defined as:

ENOB = log2 [full-scale input voltage range/(ADC RMS noise × √12)]
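The formula is simple to evaluate (the 3.3 V range and 1 mV noise below are made-up example numbers, not measurements from this post):

```python
import math

def enob(full_scale_v, rms_noise_v):
    # IEEE 1057-style ENOB: log2(full-scale range / (RMS noise * sqrt(12)))
    return math.log2(full_scale_v / (rms_noise_v * math.sqrt(12)))

print(round(enob(3.3, 0.001), 2))   # ~9.9 bits for 1 mV RMS noise on a 3.3 V range
```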

I will also list how good a typical measurement is from 5-95% of full scale. This comes from the average standard deviation across all codes, which in theory is equal to the ADC RMS noise. I like this number because it tells me the typical LSBs of noise from the converter. However, because I didn't characterize the type of distribution per code, the noise is not necessarily gaussian.

If the noise distribution is not gaussian, the predicted oversampling improvement will not be accurate, as you can see here. The orange is data oversampled at 16x, and the dark purple is the predicted standard deviation.

Here is a histogram for a random DAC input code from the ADS1015. As you can see, most of the readings are code 400, so oversampling won't help much- the average will not improve much because most of the time code 400 will be returned.

Compare this to the SAMD standard deviation plot:

Here the predicted and actual 16x measurements start to line up pretty well around code 40000. If we look at code 42000:

The distribution looks a lot more gaussian, and covers many codes- in this case averaging can help, and the predicted error lines up nicely with the measured error.

ADC Name | ENOB | Std. Dev. (LSB) | Std. Dev. (ideal volts)
SAMD21 16x | 10.4 | .89 | .00021

Unsurprisingly, the ADS1015 is the best in terms of ENOB- it is actually an 11-bit ADC (it reserves a bit for the sign, but never uses it in single-ended mode). It costs as much as any of the other chips, but it has only one job: being an ADC. The ESP32 really is pretty bad, losing about half its effective bits to noise on a given reading. The ATmega comes in at a respectable 9.7 bits! Its resolution is lower than the ADS, but much better than the ESP.

The SAMD21 ADC is what I would like to use. Looking at the standard deviation chart, 16x oversampling seems reasonable to do, which would give similar performance to the ADS1015 in terms of noise.


a plot of a wide code

The ideal code width for an ADC is 1 LSB, and DNL is the deviation from that ideal width. If we have consistent code sizes, they are easy to compensate for, but codes that represent an extra-wide or extra-narrow voltage range are more difficult. If the DNL is -1, the code will be missing (the narrowest a code can be is 0). Missing codes are very bad, because it's hard to tell if they are missing or just really, really narrow- if you map the ADC and compensate by ignoring a code, there will be an LSB of offset when it reappears.

The most interesting stats here are the average code width (this had better be about 1) and the standard deviation. If code widths are all about the same (even if they are, say, 1.2 on average), that's good, because each step is a predictable size. If the widths are all different, it's hard to tell how much the measured voltage changed with a 1 LSB difference.

plot of code widths- outliers are in orange

I flagged any codes that were more than 3 standard deviations from the mean width. For most ADCs this only catches high (wider) codes, since if the standard deviation is >.3 LSB, 3 standard deviations below the mean is a missing code (DNL = -1).

INL was calculated as the running sum of the DNL. Max INL is the maximum error you expect to see from a straight line. Positive and negative DNL cancel out, but very high DNL mid-range is hard to compensate for, because you need a lot of slightly-narrow codes to cancel it out- and while positive DNL can go to any number, DNL can only go to -1 before the code disappears.


High DNL at the start or end can be compensated for- basically, you just add a DC offset and don't use those codes. If the LSB error is consistent, or if steps are mostly slightly wide/narrow, you get a gain error, which can also be compensated. A big jump in the middle, however, is hard to compensate for. In theory you can note that a specific code maps to a wide range of voltages, but that makes the ADC kind of nonlinear and bad, plus you need a huge lookup table.

I took measurements with and without the gain/dc offsets applied on the ESP32 and SAMD21 ADCs. Both these micros have built-in compensation values. I also manually compensated the Arduino ADC for fun.

For Worst DNL, I only included values from 5-95% of the range, even for uncompensated ADCs- I feel those numbers are a better metric for how good an ADC is mid-range. If no wide codes were reported, I gave the highest code width measured between 5-95% of the range.

ADC Name | Average Width ± std dev (LSB) | Worst DNL (LSB) | Wide Codes (#) | Max INL (LSB) | Max INL (volts) | Missing Codes (#)
*the standard deviation is so absurd that this does not even matter

As you can see, the ESP32 ADC really is terrible (as promised by the datasheet). The standard deviation, DNL, and INL are all high, and there are missing codes! On the other hand, the dedicated ADS1015 is really good. The thing that surprised me here is how bad the SAMD21 is in terms of worst-case DNL. It does not add up quite as high as the ESP, but it's still not great, especially with all the wide codes it has mid-range.

Arduino ADC INL

(Un?)surprisingly, the Arduino ADC did well in overall INL, even uncompensated, compared to the ESP32. This is pretty cool, since a lot of people use it. The INL error chart shows it should be pretty easy to compensate. This is a “good” plot because it shows that on average the code width is a little wider than it should be- the INL increases in an apparently linear and monotonic fashion, with some offset near 0.


I will look at this in my next post, using the collected data. Aside from oversampling, the next ADC-error correction technique is to drop the assumption that 1 LSB = 1/2^n volts. If we have measured the code widths as 1.1 LSB on average (in volts), then every code adds (on average) .1 LSB of INL! This shows up as gain error in the ADC.

By assuming that each ADC code is on average wider/narrower than 1 LSB, we can correct this gain error. The tradeoff is a change in range and in resolution- if the resolution goes up (average LSB < 1), the range goes down, because we won't have enough bits to cover the whole range. If the average LSB is > 1, the resolution goes down but the range should go up (although it may not, because ADCs are poorly behaved at high and low values).
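A sketch of converting a code to volts under the corrected assumption- the 1.08 average width, 10-bit depth, and 5 V reference below are illustrative values, not measurements:

```python
def code_to_volts(code, vref=5.0, bits=10, avg_width=1.08):
    # Each code spans avg_width ideal LSBs instead of exactly 1.
    ideal_lsb = vref / 2**bits
    return code * avg_width * ideal_lsb

# With avg_width > 1, resolution drops and the nominal top-of-range voltage
# exceeds vref- in practice the top codes simply never appear.
top = code_to_volts(2**10 - 1)
print(top)
```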

ADC Comparison – What and How I am Testing

I built a tool to measure the DC performance of an ADC. Since I have it, I figured I may as well use it to investigate the four ADCs I have lying around- an ESP32, an Arduino Uno (ATmega328), a SAMD21-based Feather board, and the ADS1015. This post is about the data I want to collect and analyze.

Key Performance Indicators:

The things I look for in an ADC for DC performance are a high effective number of bits (including with oversampling), good monotonicity, and low DNL and INL. All of these characteristics are related, but each one gives me a feel for the ADC, and only some of them are listed in datasheets.

Here is the data I intend to capture.

Standard Deviation + ENOB

The standard deviation per DAC output code is a good metric to start with. It is important to have resolution, but if the lower bits are twiddling back and forth seemingly randomly, they are not really useful. We can measure this twiddle and find out how many bits are useful- the effective number of bits (ENOB), for a single read. Even better, that twiddle might not be random- if we look at 100 codes and average them, we should get closer to the “real” value (oversampling).

I took 100 readings with the ADC per output code. The “true” value of the ADC is probably close to the mean of these readings. I calculated several standard deviations based on this mean- the standard deviation for the population of all readings, and then the standard deviation of oversampling 4x, 8x, and 16x.

As the oversampling increases, the standard deviation of the population of oversampled results decreases. Basically, since we are averaging a bunch of samples, the random error cancels out and we end up closer to the mean. The standard deviation should be reduced by a factor of 1/sqrt(n), where n is the number of samples (the standard error). Here you can see the results of a run with the SAMD21. I plotted the 1x and 16x sampling, as well as the predicted 16x sampling (dark blue over orange). It is pretty cool to see the theoretical value line up with the measured value. If these don't line up, it's a sign that the errors are not random- and that means oversampling won't actually help get closer to the "true" value.
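The 1/sqrt(n) prediction is easy to demonstrate with simulated gaussian noise (a toy ADC model, not the SAMD21 data):

```python
import random
import statistics

random.seed(0)

# Toy ADC: true value 512 LSB, gaussian noise with 2 LSB RMS.
def read_adc():
    return 512 + random.gauss(0, 2)

def oversample(n):
    return sum(read_adc() for _ in range(n)) / n

single = [read_adc() for _ in range(2000)]
avg16 = [oversample(16) for _ in range(2000)]

# Oversampling 16x should cut the standard deviation by sqrt(16) = 4.
print(statistics.stdev(single), statistics.stdev(avg16))
```

If the noise were not random (say, stuck codes), the 16x standard deviation would not shrink like this.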

One artifact of calculating ENOB this way is that at the edge of a code, the standard deviation is going to be really high compared to mid-code. You can see this effect here- the xticks are set to the width of an ADC code, but they are not aligned. Usually the standard deviation is pretty small, but it gets really big near what are likely code transitions. Since we usually don't have a map of the whole ADC when measuring something, we have to assume the worst case (our measurement could be sitting on a code transition). Still, averaging helps improve ENOB.


Monotonicity is related to ENOB- the output codes should be monotonic at the effective number of bits. Since my reading from the ADC is an average of 100 readings, I will use the standard deviation/10, since the standard deviation should be reduced by sqrt(100). The worst-case standard deviation is about 14.5 LSB, so I chose to look at the output code every 1.5 ADC LSBs of input voltage.

In theory, the difference should always be positive- the output code should increase by at least 1 every 1.5 LSBs. However, this is not the case. There are 39 non-monotonic jumps, and each of them is only a little bit bad- about 1 LSB. On the code/code plot, these correspond to flat spots, or places where the codes actually decrease, as seen in the next section.
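The monotonicity check itself is just a sign test on successive averaged codes (toy sweep data below, not real measurements):

```python
import numpy as np

# Averaged output codes from a toy voltage sweep (made-up data).
codes = np.array([10.0, 11.2, 12.1, 12.1, 11.8, 13.0, 14.2])

jumps = np.diff(codes)
non_monotonic = int(np.sum(jumps < 0))   # places where the code went backwards
flat = int(np.sum(jumps == 0))           # flat spots

print(non_monotonic, flat)
```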


There are two DNL errors to look for: DNL = -1, which causes missing codes, and DNL > 1, which causes wide codes where the output stays flat (not always increasing with input). Looking for missing codes is easy and is done more or less by brute force- fortunately, in this example, there were no missing codes.

It is not surprising to have codes wider or skinnier than 1 LSB of the ADC. However, DNL > 1 causes the ADC output to be flat for a while- extra error that is hard to account for. I looked for codes that were more than 3 standard deviations from the mean width, and then plotted the errors. The widest code (aside from the expected nonlinearity near 0) was an astounding 7 LSBs!
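The outlier flagging can be sketched like this (toy widths with one absurdly wide code planted, not the measured data):

```python
import numpy as np

# Toy code widths: all ideal except one 7 LSB monster at code 123.
widths = np.ones(500)
widths[123] = 7.0

mean, std = widths.mean(), widths.std()
outliers = np.where(np.abs(widths - mean) > 3 * std)[0]

print(outliers)   # flags code 123
```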

output from tool showing outliers and zoomed in DNL

Max INL Measurement

To get the INL, I just took the running sum of the DNLs. Since I didn't correct for offset in this case, there is a large jump in DNL at the start.


I now have a tool for measuring real world DC performance of ADCs. I have a few ADCs lying around, and I want to figure out what kind of defects they have and compare them!

Testing An ADC for DC Characteristics

The tester!

With the ADSXXXX series out of stock for the foreseeable future, I needed to evaluate the ADCs on several micros to understand if they would be good enough for my application- reading oxygen sensors. Mostly I care about DC characteristics, because my signal should be changing very slowly- on the order of many seconds.

I’ve seen a lot of interesting work done on ADC characterization and calibration, but only rarely do the posts explain where the reference voltage is coming from. If the voltage source is noisy or has some nonlinearity or bias, it will throw off the calibration.

really really excellent series on adc error/calibration from

demo showing how the esp32 adc is bad

auto-cal of the esp32 adc*

The most obvious way to do this is to generate a really nice (precise + accurate) voltage, feed it into the ADC, and then figure out the expected error for a sample or for multiple samples. With enough resolution on this voltage, I can look for missing codes (high DNL) and for error in value from code to code (INL), find really good gain and offset correction factors, and come up with an idea of how good the ADC is (at DC, for some ADC settings, etc).

The trick, of course, is generating the nice voltage. Since I don't have a sub-mV programmable voltage reference, I had to buy one. Since I didn't want to spend hundreds of dollars on non-programmable benchtop equipment, I decided to buy a 16-bit DAC. These chips are pricey, but on the order of a pastry and a coffee, not a nice bicycle or oscilloscope. Instead of spinning my own board, I bought an eval kit to speed things up.

The DAC8050X comes in one- and two-channel flavors. I got the two-channel flavor in case I want to make differential measurements at some point in the future. These are really nice, and offer <1 LSB nonlinearity- looking at the typical characteristics, the INL and DNL are way, way lower than 1 LSB.

The DAC80502 is also pretty sweet in that it has programmable gains for both the reference and the outputs- this gives it a really wide range of outputs without any external circuitry. It also has an external VREF input, so you can supply a voltage range you care about. Combining the two outputs with some creative op-amp-ery could give you a really wide range of voltages to play with, but I care most about the sub-3V range, which this is perfect for.

*this seems somewhat sketchy given the really bad stated INL/DNL for the ADC, and the fact that the DAC on the ESP32 seems to be 8-bit. It might make the output look linear, but I wouldn't use it for anything important

Who tests the Tester?:

When making measurements against a standard, it's important to understand how good your reference is. In this case the DAC is being used as a reference, which means it needs to be very "nice". There are many charts in the datasheet showing systematic and thermal drift, referred noise, error under load, etc. The total unadjusted error appears to be around .02% typically, and the INL/DNL per code is <<1 bit, usually around .2 LSB. While these charts are all very reassuring, it's nice to verify them.

To do this I connected the DAC to a nice 6.5-digit multimeter (Agilent 34465A), captured 5 reads at each output code, and then calculated the error per code. This turned out to be a slow process, because 2^16 (65.5k) times even a few milliseconds is a LONG time. In order for the instrument to have high enough resolution/accuracy, I had to wait a long time for it to collect data, so the test ended up taking ~4 hours.

N.B. the 34465A is a lot faster at resolving lower voltages than the lower spec models- those would have taken even longer!

very nice!

This chart shows the measured voltage and the code-to-code difference in voltage, in LSB. The ideal difference code-to-code is 1 LSB of voltage, where 1 LSB is Vref/(2^16). A difference of 0 indicates no change in output (a missing output code), and the distance from 1 LSB indicates how far from ideal each step is (DNL). A difference with the opposite sign, which we don't see here, would indicate that the output is not monotonic. Fortunately, the worst thing seen is some missing codes at the start.
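The code-to-code analysis reduces to a diff of the measured voltages. A sketch with toy data- the 2.5 V reference is one of the DAC80502's options, assumed here for illustration:

```python
import numpy as np

vref, bits = 2.5, 16
lsb = vref / 2**bits                 # ~38.1 uV per ideal step

# Toy measured sweep: ideal steps, except code 3 repeats code 2's voltage.
v = np.arange(8) * lsb
v[3] = v[2]

steps = np.diff(v) / lsb             # step sizes in LSBs (ideal = 1)
missing = np.where(steps == 0)[0]    # a 0 step means a missing output code

print(missing)
```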

NB: this happens even without auto-zero on, and the meter can read negative voltages, so these really are missing codes (DNL = -1). This seems wrong compared to the datasheet, but my measurements should be good enough to capture it. That said, I won't totally rule out some small DC offset or measurement error.

That means this DAC is pretty awesome. The standard deviation of the errors is about .073 LSBs. It's not quite gaussian (kurtosis 1.17) because it has basically no tails, but it's close enough for me. As you can see, the LSB error code-to-code (DNL) is usually less than .2, which matches the datasheet.

NB: this result was obtained under ideal circumstances. When I worked at my computer/charged my phone right next to it and generally jostled the setup, I got non-monotonic readings and more LSB error. With short wires to the DMM and good test practices it performed much better.

Preliminary Data:

Here is a read of the SAMD21 ADC with the internal 1V reference and no gain, compared to the DAC with a 1.25V reference. The total error/code is shown below in LSB. This is a lot more than the stated 15-bit total unadjusted error, but I haven't used the built-in gain/offset correction, and I'm not sure if this incorporates the SAMD21 ADC Arduino bug. As you can see, a simple offset could reduce this total error by about 40%. Interestingly, there were a couple of very odd codes near the 1/3 and 2/3 points where the error quickly jumps.

Dactyl Manuform Flex PCB

My flex pcbs showed up and they are just lovely. Unlike a rigid PCB, they are a kind of coppery gold translucent color with shiny copper underneath. And unlike a rigid PCB, I had to spend hours and hours carefully routing every trace in smooth, even curves, so the whole thing just has a delightful aesthetic. They came in sheets of two (right and left) with the flexes being retained in the sheet by a few small tabs. This is a great way for them to come because once they are free of their nest, they become very flexy and floppy (as planned).

Time Savings vs Handwiring:

Wiring up a dactyl with these flexes is dead simple and fast. I estimate that it takes about an hour to put on all the diodes and to solder all the switches. I'll time myself next time, when I am not taking photos (and running to Microcenter), but it would be easy for someone to do in an evening, provided you are used to surface mount soldering. My first dactyl took multiple evenings of careful snipping, bending, soldering, stripping, and checking- I would estimate it took 8++ hours.

Assembly Steps+ Notes:

Soldering to a flex PCB is a little different than soldering to a regular PCB. The big differences are that it is very floppy, that the coverlay (kind of like solder mask) has very low thermal conductivity, and that it is very thin. I did all my soldering on a heat-proof silicone mat, with a normal sized chisel tip on a Hakko FX888D, with tweezers and no magnification. You don't need a fancy iron, but magnification can help if you are not used to components of this size. The small components (and their orientation) are important for preventing stress on the solder joints when the flexes are flexed. Below are the steps I took to solder this thing. These steps assume you already have a keyboard with the keys installed (look at the flex to make sure you install them in the right orientation- pins should be close to the bottom of the keyboard).

A note on safety: unlike rigid boards, flex boards are springy and if they release at the right time, I’m sure they could shoot some molten solder somewhere bad- say your eyes. It seems like a very good idea to wear some eye protection while working with these.

I’m free!

1: Remove the flex from the backing sheet. This should be done by carefully pulling the flex apart, not by tearing the flex or using a knife. Find each tab and pull perpendicular to it- take your time. Once the flex is fully free, double check that there are no tabs left between the flexes.

Diodes on parade!

2: Diode soldering. The cathode (marked with a line) goes on the "cup" side of the solder mask. Solder the diodes on, or if you want super detailed instructions, continue on. First, I deposited a small blob of solder on one side of every diode pad, on my dominant-hand side. Then I laid out a bunch of diodes and lined them up so all the cathodes faced one direction. On this board, most of the diodes are cathode-on-left, so that is how I lined them up.

Just a tiny dab of solder

Once I had all my diodes ready to go, I started tacking them to the board, working towards my dominant hand- that way the iron (and the hand wielding it) does not have to cross over already-soldered components. Once the diodes were tacked, I rotated the mat the flex was on and soldered the other side. If a diode didn't sit flat, I took it off and reworked it.

Like a slinkie! Note tack soldering on top and bottom buttons

3: The next step is to start to install the flex. I started on the outermost row (outside the pinky row). I simply pulled the pcb up and into the shell- it was happy to extend out like a slinkie so that some of it was outside the shell while I worked. First, I made sure that the pins from the switches went through all the holes on the pcb, then I tacked down the first and last pins with solder. Once the flex was tacked, I went through and soldered each pin to its pad, making sure to get a good connection. Once the row was done I would start the next row.

thumb cluster detail

4: Thumb cluster buttons are a little different- each one lives on its own little mini flex connection. It worked well for me to tack them down one at a time. NB the “L” peninsula buttons do not change orientation- the bottom side of the PCB should always face the switches.

5: Solder the micro. NB the island the micro sits on is meant to be folded over, so that the micro sits on top of it. There is some text that says "THIS SIDE UP" to indicate the right side. If your micro came with headers on all the pins and you don't want to remove them, you can snip off the extra flap of material. The USB connector should point "down" towards the thumb cluster. The micro orientation might seem strange to some- it's meant for a USB bulkhead like this one, so the "wrong" orientation lets the cable have a nice service loop in the shell.

6: Program the micro. Plenty of tutorials on that, and I will have some files up soon to fix a few pin order mishaps that happened on these boards (one header is flipped). NB some kapton tape should be placed under the micro to prevent shorts.

What went wrong:

one of the pin ones is not like the other one…

Inexplicably, I flipped a single header, so there may need to be separate firmwares for the left and right hands. This is annoying, but not nearly as annoying as wiring up a whole dactyl or screwing up in some way that is not a small matter of programming (SMOP).

What’s next?

I need to test the right side and finish my second keyboard (for the office).

How do I get one?

Towards the end of the week I will be putting the extra prototypes up for sale. If you are interested, you can submit some info here to be notified.

Hey! where are the design files?

At the moment, I have decided not to share the design files. Unlike many projects, there is little to be learned from them for repair or use- the boards are literally transparent, and the schematic is the same one that's been used on pretty much every dactyl. I want a bit of a head start selling these to recoup some of my costs before I make it easy for anyone to buy a grip of them and put them up for sale (however unlikely that is).

The Micro Word Clock 2021 Edition

I am planning on teaching some people to use KiCad, since it's my new favorite EDA tool. I searched high and low for a decent circuit that would do something cool, with a good variety (but small number) of parts- basically something fun and not intimidating. I got hooked on formatc1702's micro word clock. It is an excellent use of the ATmega8 series' unusually high current drive outputs.


Left- original GYXM-778 matrix. Right- Adafruit's Luckylight KWM-20882CVB matrix

The one catch was that I had a lot of trouble finding the GYXM-788ASR LED matrix called for in the bill of materials. Fortunately Adafruit sells a similar 8×8 matrix from Luckylight. I tried to design around this by including both footprints, but I ended up mostly making a mess (and I still couldn't find the 788!). Both are common cathode, but the row/column nomenclature is flipped. To formatc's credit, they did a good job with the firmware- it was easy to find pindefs.h, which let me swap around pins until I was happy. My strategy was to create a test pattern and make sure it showed up where I wanted it on the matrix. This was much faster than tracing every signal and trying to create the right pin definition the first time.

The second catch was that after programming, I couldn't get the time to change! After glossing over the code, it seemed like this must have something to do with the RTC- and after some gentle probing/touching, the board would occasionally work. Initially I attributed this to the crystal not starting up, but after many power cycles and other pokes, it seemed like the crystal would actually run just fine. As a last resort I read the datasheet, and lo and behold, the VBAT pin needed to be grounded.

I bridged these two pins

A blob of solder quickly remedied this deficiency in my PCB, and afterwards changing the time worked just fine. I suspect that sometimes the chip "just works" if that pad happens to be at the right potential on reset, and sometimes it doesn't. The button presses update the RTC time, not a time on the micro- so if the RTC does not start up, you can't change the time.

Other Notes

Pin1…probably. I prefer a dot!

I used the default KiCad footprints for a lot of the parts, and the pin 1 designators are a little wishy-washy. They look more like a printing error than a clear indicator for pin 1. I guess I will get used to it instead of re-creating every part from scratch, but if I only have a few parts, throwing a dot on the PCB would go a long way during assembly.

I should have also added a polarity marking on the power connector, and a couple of I2C test points wouldn't have hurt either. Since this was a quick board just for me and the parts are big, I didn't worry about it.

Upgrades for V2

I figured if I was going to do this board again, I may as well overdo it. I managed to cram everything into a board roughly the same size as the matrix itself, even after adding a coin cell and a USB connector for 5V power. The coin cell will keep the RTC running for about 10 years, even if it loses USB power. This way I can program it, ship it to someone, and they can just plug it in- the RTC will know what time it is. The ground plane is far from perfect, but it's about as good as I will get with a board this size.

Since the time will basically never need resetting, the switch for changing the time is very, very small. I used the NanoT switch, which is about the same size as an 0805- very, very small indeed. And because programming is now a one-time affair, I moved the programming header to castellated vias/PTH on the edge of the board. They are .1″ pitch, so they should be easy to solder to headers if I can't scare up a pogo pin jig for them. For some reason the ground pad shows an air wire. The 5V is purposely left floating, since I don't care about that connection.


The git repo can be found here. It's probably not ready for prime time yet, but check the readme- I will update it when it's reproducible.

Integrated Dive Information and Oxygen Transmitter (I.D.I.O.T)

I.D.I.O.T wrist mounted display

Knowing your PO2 goes a long way towards making it safer to go deeper with an oxygen rebreather. If you want to go pure O2, it can be used to monitor how purged the loop is, and if you want to go a little deeper it can basically turn an O2 rig into a sort of mixed gas rebreather (or a full mixed gas rebreather with proper diluent addition).

Sensors tucked away in the counterlung. This will be switched to the inhale side of the CL.

In order to conveniently know my PO2, I have purchased O2 sensors. Having built-in temperature compensation and reliable manufacturing seems like a big plus vs fabricating, assembling, testing, QCing, and calibrating my own.

For my first iteration I have started with just two cells. A third would be easy to add if this works out.

Layout and Logic

Not to scale

The electronics are going to be split into three parts- the cells/stuff in the counterlung, an electronics box, and a display. I decided that the only things in the counterlung should be the sensors themselves and a connector.

Wiring is absolutely a nightmare.

The “Electronics Box” will house the brains of the operation (an ESP32), and the battery. Batteries and other flammables will be kept outside of the oxygen rich environment of the rebreather, for obvious reasons. In the unlikely event of the battery shorting to the cells, hopefully the high impedance of the cells will limit resistive heating or fire. In the future, a USB port with a cap will be wired in for charging.

Box as tested

This box has been tested to ~80 FSW with just the cord grips+cord installed, and it passed without noticeable leaking. The cord grips are MSM-M SKINTOP connectors. They don't seem like they should work, and yet they do. McMaster sells these as “submersible cord grips”. N.B. they make a face seal with the enclosure, and do not require a gland like an SAE O-ring boss (ORB) fitting.

The main O-ring seal is a 1.5mm O-ring made from cord stock and superglued at the ends. You can see the join just above the middle heat set insert in this photo. Surprisingly this does not seem to create any significant leak paths, although there is always a slight possibility that I will have to eat my words on that someday.

The display will be upgraded to a HUD at some point, but for now it will be wrist mounted. It displays two PO2 cell readings, a compass heading, and (in the future) the depth. As you can see in the photo, the top row is “highlighted” to show a problem- the cells are disconnected and are reading a very high PO2.

The Electronics

EE layout

Reading an off the shelf galvanic O2 cell is dead easy, since the temperature compensation and shunt resistor are built in. However, the output voltage is fairly low, and so it should not be fed directly into the ADC of a typical micro. It is possible to read such a voltage (~20mV), but it wastes a good portion of the resolution of the ADC.

For example, the maximum output expected is 2V (representing a PO2 of 2). With a 3.3V ADC, we are only ever using about 2/3 of the range of the ADC, which effectively limits our PO2 resolution to about 2/3 of the ADC's resolution.
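To put rough numbers on that, here is a quick sketch. The 12-bit, 3.3V ADC and the gain mapping are illustrative assumptions, not measured values from this hardware:

```python
# Illustrative resolution math: a 2 V max signal on an assumed 12-bit, 3.3 V ADC.
adc_bits = 12
vref = 3.3           # assumed ADC full-scale voltage
v_signal_max = 2.0   # max cell signal, representing a PO2 of 2.0

lsb = vref / (2 ** adc_bits)            # volts per ADC code
fraction_used = v_signal_max / vref     # ~0.61: only ~2/3 of the range is used
codes_used = int(v_signal_max / lsb)    # ~2482 of 4096 codes cover PO2 0..2

# With an input gain stage mapping 0-2 V onto the full 0-3.3 V range,
# every code becomes useful and PO2-per-code improves accordingly:
po2_per_code_no_gain = 2.0 / codes_used
po2_per_code_gain = 2.0 / (2 ** adc_bits)
```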

Since these signals are also not amplified or buffered in any way, it seems good to keep them away from the MCU. I have resolved to put them on an I2C ADC with an internal gain stage, which will let me both maximize resolution and keep the signal wires for the cells short. To this end, I used an ADS1015 breakout from Adafruit.

Since it was on hand, I also threw in an LSM303 to use as an electronic compass. Since the compass has no “inertia”, its readings are kind of jumpy, but some smoothing should help make it a little less jittery. I could also try some compensation for nearby electronics, but they seem to have little effect. The LSM accelerometer/magnetometer lives in the wrist piece, although I did consider mounting it in the “head”, which would show you body heading, but not necessarily what you are looking at.
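As a sketch of the kind of smoothing I have in mind (the filter constant and wrap handling are my own illustration, not what the firmware currently does), an exponential filter has to take the shortest path around the 0/360 wrap:

```python
def smooth_heading(prev, new, alpha=0.2):
    """Exponentially smooth a compass heading in degrees, stepping along the
    shortest angular path so a 359 -> 1 jump doesn't swing 358 degrees."""
    delta = ((new - prev + 180) % 360) - 180  # signed shortest difference
    return (prev + alpha * delta) % 360

# Jittery readings near north stay near north instead of whipping around:
heading = 350.0
for reading in [355.0, 2.0, 358.0, 5.0]:
    heading = smooth_heading(heading, reading)
```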

The display is the 128×64 OLED FeatherWing. It's easy to integrate, and it is fairly compact in terms of “extra space” for unused headers/buttons.


Believe it or not, this was taken in 10 feet of water while the sun was still up. Zoop for backup depth gauge/dive timer

I headed to the mystical Mystic Lake to do some testing. The combination of near-zero visibility to start with and a haze of sediment/algae/stuff I don't want to think about made for more-or-less night-dive conditions, even with a light, during the day. However, the little O2 cell reader and compass seemed to behave relatively well. Most importantly the firmware did not crash, and no water seemed to get in. Can't wait to test it somewhere actually fun!

Galvanic Sensor and the Science Sr.

The Science Sr. doing what it does best

In the last post I alluded to a larger pressure pot, the Science Sr. This was totally based off of the $50 Cell Checker from the Wreckless Diver, which is based off of a now-defunct post on some other website. I'll do my best to document what I have made here, since it's an awesome tool. I'll put a standard disclaimer on building one of these: don't do it, it could blow up, it could really kill you, and it might hurt the entire time.

Why this could be a bad idea

The Science Sr. is based around an air filter canister. These are rated to 125 PSI…with water. They are 100% not for use with air as far as I can tell. Air, unlike water, is fairly compressible. That means the amount of energy stored in pressurized air is huge compared to a hydraulic system at the same pressure. Filling this to 125 PSI might be fine, or it could explode because some component was exposed to chemicals, or because it has been cycled too many times or too rapidly. Plastic will fatigue over time, and that will reduce the margin between exploding and not exploding.
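To see why compressed air is so much scarier than pressurized water, compare stored energy. The chamber volume here is a guess, but the orders of magnitude are the point:

```python
import math

# Compare energy stored at 125 PSI in a ~4 L housing: compressed air vs water.
V = 0.004              # chamber volume, m^3 (~4 L, a guess)
P0 = 101_325.0         # atmospheric pressure, Pa
P = P0 + 125 * 6895    # 125 PSI gauge converted to absolute Pa
K_water = 2.2e9        # bulk modulus of water, Pa

E_air = P * V * math.log(P / P0)             # isothermal work recoverable from the gas
E_water = V * (P - P0) ** 2 / (2 * K_water)  # elastic energy of compressed water

ratio = E_air / E_water  # air stores roughly four orders of magnitude more
```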

Just like the wreckless diver, I have added an over pressure relief valve (OPV) to prevent going over ~45 PSI or so. This is not a guarantee that it will not go over 45 PSI- I have it hooked up to a scuba reg with an IP of ~120 PSI that can provide something like 100 SCFM (cubic feet/min of gas at some standard temp and pressure). That is fast enough to drain a typical AL80 scuba tank from 3000 PSI to 0 in under a minute. There is no guarantee that the OPV can keep up with this flow rate, so I am VERY careful operating the valve.

Also, in some kind of blue-moon case where my first stage reg goes haywire, it's designed to fail open. This would be bad news, because it will shoot tank pressure right out of all the low pressure hoses. In a dive situation this is great, because you have ~30s to breathe off a free flowing reg. In a pressure test situation this is bad, because up to 3000 PSI (more for some tanks) will be shooting out of every reg and inflator hose, including the one stuck to the pressure tester, which will almost certainly explode it.

This was mitigated somewhat by testing with a mostly empty (500 PSI) tank, and by very careful control of the valve. For cell checking applications, it may be a good idea to partially fill the chamber with water to reduce the volume that is full of air, reducing the potential energy. Anyway, on to how to build it.

Science Sr. Construction

Science Sr. Assembly. Note notch/hole at 6 oclock on the blue part.

Here it is, in all its glory. On top is basically a cross (X) manifold that houses all the important stuff. At 12 o'clock is an NPT to BC hose adapter going into a ball valve. At 3 o'clock is a 1/4-MPT to 1/8-FPT adapter and a 1/8 NPT pressure sensor. These are available on eBay and seem to work just fine. It seems like they are for some kind of automotive application based on the connectors, and they are available in a variety of ranges. At 6 o'clock is a 1/4-MPT to 1/4-MPT adapter. This attaches the female pipe thread cross fitting to the female pipe thread of the canister head. At 9 o'clock is a 1/4-MPT 45 PSI overpressure valve.

Close up of epoxy job

On the other end is a 1/4″ MPT to 1/8″ FPT adapter. The threads on the filter are one-time-use soft plastic NPTs, so I essentially “replaced” them with a metal 1/8″ NPT. Screwed into this is a 1/8 MPT to hose barb adapter with a bunch of wires epoxied into it, creating an airtight pass-through. I like the 60 second epoxies that have narrow mixing nozzles for this, because they cure FAST and the nozzle fits into the fitting well. However, they are expensive and usually only come with two nozzles, which is wasteful if you only use a few ml per fitting. Silicone seems to work OK for this, but it's probably best to apply it with some kind of syringe. While none of these fittings have shot out the epoxy/silicone plug yet, it is probably best to get as much plug in as possible, especially on the inside of the fitting. This should create a step in the plug to make it much stronger.

It is important to install this on the side labeled IN, since there is a large pass through there, and the wires will be much easier to pull into the canister body (see the assembly picture above).

Here is a shopping list:

*I had these on hand but this mcmaster part should be equivalent

Testing and Results

After some initial testing, it seemed like I would need to return to the teflon tape as a membrane to get good sensor response. I suspect this will let electrolyte evaporate out over time (based on previous experience), but it does give a very satisfactory response time, and over short periods the sensor is serviceable enough. It was necessary to make the sensor up in the electrolyte to eliminate bubbles. This meant pouring the electrolyte into a dish and then wearing gloves to assemble the sensor “underwater”. Excess KOH was washed off.

As a note- for this exact size of sensor, a ~510 ohm resistor seemed “right”. More discussion of that below.

Step Response

Yellow = sensor, Green = Pressure. 500mV = 1ATM

Here is an example of the step response to pressure. With the ball valve, it's a little nerve wracking to throw it fully open, and I wasn't keen on testing the OPV, so the pressure step here actually has a rise time of ~250 ms. The rise time of the sensor is about the same; it does not seem like this step is fast enough to elicit any kind of delay in the sensor. The pressure goes from 1 ATM to 1.5 ATM, and the sensor rises from ~220 mV to ~325 mV, which is about what we would expect (220 mV × 1.5 ≈ 330 mV).

The next test was to wrap the sensor up in a bag with the inflator hose, and to shoot O2 straight into it. This should give a value for 100% O2 at 1 ATA.

Yellow = sensor

I wasn't sure how much O2 to squeeze in there or where it was really saturating, so there is a little bump in the middle as I jostled the sensor. But after spewing out a good amount of gas, it looks like the maximum value is 500mV. This is an interesting result, and shows that the sensor is actually not linear all the way up to 100% O2. If the sensor were linear, the voltage should be 5x what it is in normal “air” (~20% O2) at atmospheric pressure. That would be 1V, which it does not achieve. I believe this means the cell is current limited.

yellow = sensor

Here is the other end of that test: me tearing the bag to introduce normal air back in. From the initial jostle at 10s, it looks like it takes about 1 min for the O2 to go back to normal levels. This is not representative of sensor performance, because the gas actually has to get agitated for the O2 to go anywhere. In other words, it does not purely measure sensor response, but also incorporates the gas diffusing/getting blown away.


A good sensor is a linear sensor

This is a plot of pressure (Y) vs sensor value (X). To produce this, the chamber was cycled several times from 1 ATA (.5V) to 2 ATA (1V). Cursors represent the starting point, and where the sensor “should go” at 2 ATA. Doubling the pressure should double the sensor value from 190 mV to 380 mV, and that is almost exactly what the line shows. Doubling the pressure represents going from a PO2 of .2 to .4. The voltage stays below the 500 mV that the sensor saturates at, so it comes out beautifully linear.

Since the sensor was cycled several times, we can detect some other interesting sensor characteristics, namely a small amount of hysteresis and non-linearity. The trace generally runs on the right side on the way “up” in PO2/pressure, and on the left as PO2 drops. This means a sensor will read slightly differently after it has been exposed to higher pressure; it has “memory”. The maximum value of this is almost 50 mV!

There is also straight-up non-linearity, as the shape of the curve is not a straight line. However, this seems to be fairly small.

A single very slow run

Here is another hysteresis example, with the rise and fall done as slowly as possible. This should give an idea of the minimum hysteresis.

Slow and fast traces

Here is an example of a slow and a fast pressurization and depressurization. I couldn't color the two differently, but it is easy to see that the rightmost trace (which was a pressure drop) looks quite different: it drops to 0 pressure and then moves to the left to drop the voltage.

Current Limit/Resistor Size

This is the setup I have, only the amplifier is an oscilloscope

I mentioned above that I used a 510 ohm resistor as a current shunt, and that the cell is current limited. O2 cells, while read out in mV, are actually current-producing devices. Typically this is measured by putting a shunt resistor across the cell and measuring the voltage. With low resistances, this produces a small voltage drop, which can be difficult to measure. Higher resistances produce higher voltages, but you can only go so high! This is because the cell can only tolerate so much voltage across it*, and can only generate so much current. Therefore it is a balancing act to find an appropriate resistance: too small and readings get jumpy, too big and the cell won't change value at all, since it will be current limited.

*I am admittedly fuzzy on this, but you probably do not want to get close to the open circuit voltage of the cell, which is 1.2V
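That balancing act can be sketched numerically. The cell's maximum current here is an assumed ballpark for a small homemade cell, not a measured spec, and the hard-clip model is a simplification:

```python
# Toy model of shunt selection: the cell behaves like a current source whose
# output voltage clips once it hits a compliance limit (simplified).
i_cell_max = 0.9e-3   # assumed max cell current, A (ballpark, not measured)
v_limit = 0.5         # stay well below the ~1.2 V open-circuit voltage

def reading_v(i_cell, r_shunt):
    """Shunt voltage, clipped when the cell runs out of drive."""
    return min(i_cell * r_shunt, v_limit)

v_small = reading_v(i_cell_max, 10.0)    # ~9 mV: real, but jumpy/hard to measure
v_510 = reading_v(i_cell_max, 510.0)     # ~459 mV: uses the range, no clipping
# With 5k the cell saturates: half current and full current read the same.
saturated = reading_v(i_cell_max / 2, 5000.0) == reading_v(i_cell_max, 5000.0)
```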



Building a galvanic O2 sensor is possible and actually fairly easy. I suspect even the polarographic sensor would have been fine if capped in electrolyte, and if it did not leak. The sensors can obviously be tuned to give good responses, as this is exactly the same way that commercial O2 sensors work. However, building a really good sensor that could be used for diving, where the sensor is monitoring life support, requires a lot of sensor characterization, which is way beyond what I want to do. There's a lot more to an O2 cell than just getting it to spit out a voltage proportional to the O2 concentration- for example:

  • Temperature compensation
  • Shelf life/storage condition determination
  • Repeatable assembly/manufacturing processes

Any one of these could take a month (or longer) to do, and they would require a lot of units. I will stick to commercial O2 cells for now (and in the future), but now that I have a cell checker/pressure pot it should be interesting to compare a “real” cell to home made!

Pivot to a Galvanic O2 Sensor

After a maddening time with the polarographic sensor, I decided I would try to build the galvanic flavor of oxygen sensor. After reading this tech tip from Oakton Instruments, it seemed pretty obvious that galvanic cells have big advantages. The main draw for me was that the output is easy to measure, eliminating the need for the fancy DMM. It would also simplify the electronics compared to what is needed for reading a polarographic cell.

Electronics Comparison- Galvanic vs Polarographic

Here is what I think would be needed to read a polarographic cell: a precision buck or LDO to bias the cell, with a feedback pin at the top of the cell. This eliminates the burden voltage of the shunt resistor, which is then fed into some kind of stack of op amps that produce a voltage on the other side. This might not be so bad, and given that we have a zero-drop shunt resistor, we no longer need to worry about having a tiny burden voltage.

For a galvanic sensor, it's pretty much as simple as it can be: a single resistor and a high impedance amplifier to match the voltage output range to the desired ADC.
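The readout math for that chain is correspondingly simple. The full-scale range and the one-point air calibration value below are illustrative, not measured:

```python
# Convert a raw code from a PGA-equipped ADC back to cell millivolts, then to
# PO2 via a one-point calibration in air (all numbers illustrative).
adc_full_scale_mv = 256.0   # e.g. a +/-256 mV PGA range
adc_counts = 2047           # 12-bit signed full scale

def code_to_mv(code):
    return code * adc_full_scale_mv / adc_counts

def mv_to_po2(mv, cal_mv_in_air=10.0):
    """cal_mv_in_air is whatever this particular cell reads in air (PO2 = 0.21)."""
    return 0.21 * mv / cal_mv_in_air

po2 = mv_to_po2(code_to_mv(800))  # ~100 mV -> PO2 ~2.1, an alarm-worthy reading
```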


The first galvanic sensor I made just replaced the silver electrode with a zinc electrode. Platinum or gold (or likely any noble metal) makes a good cathode for this system. Zinc, in contrast to silver or platinum, is a very, very agreeable metal to machine. I can easily take a millimeter or more off at a 25mm diameter. The rod I got from Rotometals appeared to be cast, although without any apparent porosity after ~2mm into the diameter. The one off-putting thing is that zinc fumes are toxic, and the melting point is alarmingly low (~420°C). So all the operations were done with a lot of coolant, and the soldering to the electrodes was done very gently to prevent or minimize any zinc vapors.

As you can tell from me stating that there was a first sensor, there is also a second sensor. The first sensor seemed to have the same drift problem as the polarographic sensor, which makes me suspect that the root cause of both sensors drifting is electrolyte loss through the membrane or leaking at the press fit of the metal to the delrin. I also wanted to increase the area of the zinc electrode and the volume of the electrolyte. More zinc will alleviate any concerns about using up the electrode, and more electrolyte will reduce the impact of losing small amounts of fluid, or of bubbles, since each bubble or bit of lost fluid will be small relative to the larger sensor.


Step response to a blast of O2

Much like the polarographic sensor, it kind of works. It certainly can detect a change in the level of oxygen, but it does it in kind of a non-linear way. For example, I would expect that if 20% air is ~300mV, pure O2 should be 5x that, or 1500mV. It is possible that the cell just can't generate that much current, and that I should try a smaller resistor, but I certainly have not verified that yet.

With the improved sensor body, the sensor was also a lot more stable. It dropped a few mV over a few hours, and it's hard to know if that was related to temperature, drift, or the actual O2 concentration in the room. However, this stability was only achieved after ~30 minutes as the sensor reached equilibrium. Likely the O2 in the bubbles in the electrolyte needed to be used up first, as they are in direct contact with the electrolyte. I suspect that after that happened, the sensor stopped sensing O2 trapped inside itself and reached equilibrium with gas diffusing across the membrane.

A small dip from breathing on the sensor

On the other hand, it does seem very sensitive. Breathing on the sensor produces a small dip, and there is a noticeable difference in value (~30mV) from when I sit right in front of it and breathe on it vs when I leave the room- this is mostly anecdotal but interesting.

XY plot of pressure vs voltage of sensor. Thanks scope!

The linearity is not very good, as you can see. This is a plot of the pressure transducer vs the sensed voltage. It's all over the place, but it is vaguely the right shape. Ideally the sensor should trace a straight line here, but there may be some hysteresis that causes the non-linearity.

Unfortunately, just like with the step response, the change here should be much bigger. This test pressurized the sensor from 1 bar to roughly 6 bar, so the reading should be about 6x as big, but it only went up a few mV! So this is not that impressive, as it shows either a non-linear sensor or some kind of enormous DC offset.

The last issue seems to be that the sensor leaks somehow. It may be that water vapor is permeating the membrane, because when left overnight the sensor dried out. In a humid environment like a rebreather this may not be an issue, but for storage it certainly is. This answers a question I have had for a while: why are rebreather sensors so slow? They are rated to a rise time of 6s to get to 90% of the final value. This is much slower than any of the sensors that I have seen, and does not seem to be an inherent characteristic of the sensor. My suspicion is that a much thicker membrane is used on rebreather sensors to reduce electrolyte water loss.


This sensor seems a lot easier to use, but it seems like a lot of the issues I have noticed may be due to my membrane selection and leaking. I have parts on order for a larger pressure pot (the under $50 cell checker) to see if I can get the larger sensor to behave in a linear way with a polyethylene or FEP (or even teflon tape) membrane. I think this cell checker will be very useful for a number of other things like depth gauges/computers/ingress testing so I am excited to have it on hand. I will have to make an effort to keep my pressure pot electrolyte free this time!

Oxygen Sensor and the Science Jr.

Testing the oxygen sensor will need to be done over a wide range of temperatures and PPO2s, and the cheapest, easiest, safest way of doing this seems to be to not use pure O2 gas. Not only is O2 a somewhat “spicy” gas, but I don’t have a huge tank of it sitting around in my house.

Instead, I intend to increase the PPO2 by increasing the pressure of the air; this will also prove out that the sensor works “at depth”. While that might seem magical, it's pretty easy to imagine: as gas density increases at higher pressures, there are just more oxygens per volume bouncing around. The odds that one of them bounces off the sensor go up as the pressure increases, linearly with pressure (and therefore with depth).
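That argument is just Dalton's law in code: PO2 is the oxygen fraction times absolute pressure, and absolute pressure rises by about 1 ATA every 33 feet of seawater:

```python
def po2_at_depth(depth_fsw, fo2=0.21):
    """PO2 (in ATA) for a gas with oxygen fraction fo2 at a depth in feet of
    seawater; 1 ATA of water column is ~33 fsw."""
    p_abs_ata = 1.0 + depth_fsw / 33.0
    return fo2 * p_abs_ata

po2_surface = po2_at_depth(0)   # plain air at the surface: 0.21 ATA
po2_33fsw = po2_at_depth(33)    # doubled pressure doubles PO2: 0.42 ATA
```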

The Science Junior

With this in mind, I set about building what I am calling the Science Junior, since it looks like a generic science widget from KSP. Basically it's just a small pressure chamber with a window and some NPT ports, which can be used for various purposes:

  • Pressure port via schrader valve from bike pump
  • Sensor wire pass thru
  • Pressure sensor
  • Gas infeed (?)


For those interested in the construction, the O-ring is just superglued together from cord stock, and the sensor wires are run through a 1/8-27 NPT hose barb and epoxied in place.

Designed for a maximum pressure of 150 PSI, the 1/2″ thick polycarb cover and 8x M3 bolts should be more than enough to keep things together. The cover sees about 1 lbf per PSI, so at ~150 lbf total I didn't bother with detailed math.
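Here is the math I didn't bother with, sketched anyway. The exposed area is inferred from the “1 lbf per PSI” figure, and the M3 proof load is a typical class 8.8 number rather than the actual hardware spec:

```python
# Rough bolt-load check for the Science Jr. cover.
exposed_area_in2 = 1.0    # in^2, implied by ~1 lbf per PSI (assumed)
p_max_psi = 150.0
n_bolts = 8

total_force_lbf = p_max_psi * exposed_area_in2  # load trying to pop the cover
per_bolt_lbf = total_force_lbf / n_bolts        # ~19 lbf per bolt

m3_proof_lbf = 650.0   # typical class 8.8 M3 proof load (~2900 N), assumed
margin = m3_proof_lbf / per_bolt_lbf            # ~35x, hence "didn't bother"
```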

One thing I would do differently is to use something removable for the sensor wire infeed, probably something that would get dropped in through the front and get captured by a lip on the inside, as shown in the sketch.

Testing and Learning

For testing, I set the bias voltage and logged the pressure (using a pressure-to-voltage transducer) and current of the sensor simultaneously while varying the pressure, which controls the PPO2. When the pressure is plotted against the current, it should be roughly linear. The chart above shows the sensor working reasonably well; however, there is an odd drop-off in current after being pressurized, which manifests as a non-linearity in the chart above.

If we turn the scatter plot into a line plot to represent it as a time series, it looks a lot like a typical plot of hysteresis, but that seems like a red herring.

Normalized Pressure and Current for the same test

Looking at the normalized time series of the test, we can see that the sensor seems initially very linear, but then the current drops off after being exposed to pressure.

Here is some data from another test- the same strange trend occurs.

Looking at several sets of data we can see some that are complete garbage (blue data, yellow data, orange data) while some seem highly linear (green, red). Not exactly a good look for a mission-critical sensor that is helping you make life support decisions. Imagine trying to drive the speed limit if your speedo didn’t work!


I strongly suspect that pressure is playing a role here. First, sometimes gas is trapped under the sensor membrane, which causes the sensor to always read high, and to actually drop in current as pressure is applied. This seems to happen as the gas contracts and the membrane basically vacuum-seals to the cathode. While oxygen is still in contact with the cathode, it has no opportunity to interact with the electrolyte, and this causes the current to drop.

Another factor is that in order to get the membrane closer to the cathode, I have had to burp out some electrolyte manually. This probably causes a slight negative pressure on the sensor as the membrane tries to regain its shape- I suspect that this can pull in gas and cause the gas blocking problem.

To solve this, I tried reducing the gap between the top lip of the sensor and the cathode, and making the sensors up “underwater” in electrolyte. This did seem to help with bubble elimination, but I still had a maddening and slow loss of current over time: uA over minutes or hours. These sensors show that good linearity can be achieved, but this drift is unacceptable for rebreather applications.

Additionally, I suspect that the sensor may not be fully watertight. That's no good, since there needs to be electrolyte in there or it won't work! Some of the slow DC drift that I see could be due to this, or to evaporation through the teflon tape. The volume is <<1ml, so even a small amount of evaporation could have an effect. I may try to remedy this, or I may try to make a galvanic sensor…it turns out all I need is a little zinc or lead.

To be continued I guess!