
The ESP32 ADC

A while back, I spotted the XIAO line of ESP32 microcontrollers. For $5, you get a microcontroller, WiFi and/or Bluetooth, a decent number of GPIOs, and LiPo battery management with USB-C charging.1 And the microcontroller is an ESP32, which has first-party support for Rust from Espressif. That sounded pretty good to me– I picked up a few of the ESP32-C3 variant, a small LiPo battery, and some connecting wires.

Watching the battery

While a given battery has a “nominal” voltage, e.g. 3.7V for a LiPo, it doesn’t deliver exactly that voltage. The effective voltage varies (lowers) as the battery discharges.2 For the device(s) I have in mind to build, I want to keep an eye on the battery state from software, so I know when they approach empty and need a recharge.

One thing missing from the XIAO ESP32-C3 board is battery monitoring. There's nothing built in to read out the charge level or voltage.

But! The Seeed Studio wiki has a section on battery usage and monitoring, based on a user’s forum post. With a couple external resistors, we can allocate an analog-to-digital converter (ADC) pin to read out the battery voltage.

Or can we? The guide uses the Arduino environment, but there are some gaps we need to fill when coming from Rust. Let's go!

ESP32 in Rust

First, we need to get set up to run code on the ESP32– Rust code specifically, as is my habit these days. This tutorial covers getting set up with the std environment, which runs FreeRTOS under the hood. Once oriented with that, I went through this tutorial to understand the no_std environment better.

I wound up using the no_std environment and the embassy runtime, with tools (espflash) installed as described in those tutorials.

Cargo.toml
[package]
name = "esp-adc-examples"
version = "0.1.0"
edition = "2024"

[dependencies]
esp-backtrace = { version = "0.18.1", features = [
    "esp32c3",
    "panic-handler",
    "println",
]}
esp-bootloader-esp-idf = { version = "0.4.0", features = ["esp32c3"]}
esp-hal = { version = "1", features = [
    "esp32c3",
    "unstable",
] }
esp-println = { version = "0.16.1", features = ["esp32c3", "log-04"] }
esp-rtos = { version = "0.2.0", features = ["esp32c3", "embassy"] }
embassy-executor = { version = "0.9.1" }
embassy-time = { version = "0.5.0" }
main.rs
#![no_std]
#![no_main]

use embassy_time::Duration;
// Use the panic-handler from esp_backtrace:
use esp_backtrace as _;

use embassy_executor::Spawner;
use esp_println::println;
use esp_hal::analog::adc;

esp_bootloader_esp_idf::esp_app_desc!();

#[esp_rtos::main]
async fn main(_spawner: Spawner) {
    esp_println::logger::init_logger_from_env();
    let peripherals = esp_hal::init(esp_hal::Config::default());

    use esp_hal::timer::timg::TimerGroup;
    let timg0 = TimerGroup::new(peripherals.TIMG0);

    use esp_hal::interrupt::software::SoftwareInterruptControl;
    let software_interrupt = SoftwareInterruptControl::new(peripherals.SW_INTERRUPT);

    esp_rtos::start(timg0.timer0, software_interrupt.software_interrupt0);

    // ADC setup code goes here...
    loop {
        // ADC reading code goes here...
        let value = 1;
        println!("ADC reading: {value}",);

        embassy_time::Timer::after(Duration::from_secs(2)).await;
    }
}
.cargo/config.toml

[target.riscv32imc-unknown-none-elf]
runner = "espflash flash --monitor"

[build]
rustflags = [
  "-C", "link-arg=-Tlinkall.x",
  # Required to obtain backtraces (e.g. when using the "esp-backtrace" crate.)
  # NOTE: May negatively impact performance of produced code
  "-C", "force-frame-pointers",
]

target = "riscv32imc-unknown-none-elf"

[unstable]
build-std = ["core"]

# Enable usable backtraces even in release builds.
[profile.release]
debug = true

Hardware setup

The wiki page recommends putting two resistors in series between the battery’s positive and negative terminals, and connecting the node between the resistors to pin GPIO2 / A0.3 The wiki page suggests 220kΩ resistors; I didn’t have those on hand, so I used 470kΩ instead (more on this below).

Why do we have these resistors? A second-hand statement on the wiki page:

The datasheet says nominally 2500mV full scale AD conversion…

but I’ve measured this battery at over 4V when fully charged. So: we need to cut the voltage down into a range the ADC can measure.

These resistors create a voltage divider. Because the resistors form a path between the positive and negative terminals of the battery, some current flows through them. By Ohm’s Law:

$$\begin{aligned} I &= {V_{bat} \over {R_1 + R_2}} \\ &= {3.7\text{V} \over {470\text{kΩ} + 470\text{kΩ}}} \\ &= 3.94 \text{μA} \\ \end{aligned}$$

Across each resistor, there’s a voltage drop proportionate to resistance:

$$\begin{aligned} V_{R_2} &= I \times R_2 \\ &= 3.94 \text{μA} \times 470\text{kΩ} \\ &= 1.85 \text{V} \\ \end{aligned}$$

At a cost of 3.9μA of current, we’ve halved the voltage. Note that while the current (\(I\)) depends on the particular values, the divisor only depends on the fact that \(R_1 = R_2\). That’s why it was fine for me to substitute bigger resistors: the voltage divider only cares that the resistors are equal.4
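
To double-check that arithmetic, here's a tiny Rust sketch of the divider math (plain integer arithmetic, nothing board-specific; the 4200mV and 3000mV endpoints are my own rough assumptions for a full and a nearly-empty LiPo):

/// Voltage at the ADC pin (in mV) for a given battery voltage and an R1/R2 divider.
const fn divided_mv(v_bat_mv: u32, r1_ohm: u32, r2_ohm: u32) -> u32 {
    v_bat_mv * r2_ohm / (r1_ohm + r2_ohm)
}

// Assumed endpoints: ~4200mV when full, ~3000mV when nearly empty.
const FULL_MV: u32 = divided_mv(4_200, 470_000, 470_000); // 2100mV
const EMPTY_MV: u32 = divided_mv(3_000, 470_000, 470_000); // 1500mV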

For a ~4V max battery, dividing by 2 should put us well within the ADC's 2500mV range. So, we should be fine to measure, right? Let's try it out!

Photo: the schematic described above, laid out on a breadboard. Some of the wires are bent in awkward positions.

Maybe I should stick with the diagrams.

Attenuation

I started with an example based on this “snake” program:

// ADC setup code:
let mut config = adc::AdcConfig::new();
let mut pin = config.enable_pin(peripherals.GPIO2, adc::Attenuation::_0dB);
let mut adc = adc::Adc::new(peripherals.ADC1, config).into_async();

loop {
    // ADC reading code:

    const SAMPLES: usize = 100;
    // ADC produces 12-bit values, we can store in a u16
    let mut samples = [0u16; SAMPLES];
    for v in &mut samples {
        *v = adc.read_oneshot(&mut pin).await;
    }

    let total: usize = samples.iter().map(|v| *v as usize).sum();
    let value = total / SAMPLES;
    println!("ADC reading: {value}");

    embassy_time::Timer::after(Duration::from_secs(2)).await;
}

This kinda worked, but I was only getting a binary value: either 4095, or something much lower, depending on whether I had connected the battery or grounded the pin.

I recognized the 4095 value as “off the charts high”. The ADC has 12 bits of resolution, and \(2^{12} - 1 = 4095\); we’re saturating the ADC.

All this was running through the external voltage divider; it should be within the 2500mV range. Why was the ADC saturated?

After some flailing, I went back to the datasheet, where the 2500mV range was supposedly sourced from. Section 5.5 “ADC characteristics” does indeed state a range of 0~2500… for ATTEN3. What’s that?

The ESP32 ADC has an internal attenuator, i.e. a configurable voltage divider– like the one we made, but inside the chip, and configurable to different ratios. The options are framed as decibel levels in the esp_hal::analog::adc::Attenuation type:5

| Attenuation level | Gain | Multiplier | Approximate max |
|-------------------|---------|------------|-----------------|
| ATTEN0 | 0 dB | 1 | 750 mV |
| ATTEN1 | -2.5 dB | 0.562 | 1050 mV |
| ATTEN2 | -6 dB | 0.251 | 1300 mV |
| ATTEN3 | -11 dB | 0.079 | 2500 mV |

That explains the first round of saturated readings: the snippet above asked for 0dB attenuation, whose ceiling is roughly 750mV, far below the ~1.85V coming out of our divider. The wiki's example (apparently) assumes the use of 11dB attenuation, giving the widest range. That makes sense as a default for, say, the Arduino environment: there are a lot of 5V signals, so a simple external divide-by-2 can make that range useful.

I’m not sure what typical levels the other attenuation levels are “good” for. For instance, 3.3V divided-by-2 is still more than the ATTEN2 range. If you know why these target ranges show up, drop me a line!

If we want to stick with the “even” voltage divider, we’ll want to use the maximum attenuation going forward:

// ADC setup code:
let mut config = adc::AdcConfig::new();
let mut pin = config.enable_pin(peripherals.GPIO2, adc::Attenuation::_11dB); // changed!
let mut adc = adc::Adc::new(peripherals.ADC1, config).into_async();

But that didn’t cut it: I still got a saturated value of 4095. To understand why, we need to look beyond the electrical model to see what the ADC is actually measuring, and how to calibrate our measurements to reality.

External calibration

Let’s start by reviewing the external voltage divider. Is it actually “divide by 2”?

$$V_{adc} = V_{bat} \times {R_2 \over {R_1 + R_2}}$$

In the case of \(R_1 = R_2\), that’s the same as dividing by 2.

We could create a different ratio by having different resistances on either side of the ADC. For instance, if we wanted to divide by 4, we could chain three resistors for \(R_1\) and use just one for \(R_2\). That would give us a ~10V ADC range; we could measure the charge of a 9V battery!
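
Spelling that out, with \(R_1 = 3R\) and \(R_2 = R\):

$$V_{adc} = V_{bat} \times {R \over {3R + R}} = {V_{bat} \over 4}$$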

Even if we’re using “the same resistor value” for \(R_1\) and \(R_2\), the actual resistors on the breadboard are slightly different. The resistors are only nominally 470kΩ; the ones I’m using have tolerances ±5%, so 447kΩ-494kΩ.6 As a result, the external voltage divider divides by about 2, but it will vary depending on the specific parts we have on hand.
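
As a worst-case sketch, if one resistor sits at the bottom of its tolerance band and the other at the top:

$${R_2 \over {R_1 + R_2}} = {447\text{kΩ} \over {494\text{kΩ} + 447\text{kΩ}}} \approx 0.475$$

and the opposite skew gives about 0.525, so the nominal "divide by 2" could be off by as much as ±5%.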

With multimeter in hand, I took some measurements:

$$\begin{aligned} R_1 &= 477.2 \text{kΩ} \\ R_2 &= 475.5 \text{kΩ} \end{aligned}$$

Both are within 2% of their nominal resistances. They’re both skewed in the same direction, which is good for calibration: as we approach \(R_1 = R_2\), we get close to a divisor of 2, regardless of the absolute values.4 So with these components, the actual ratio we’re getting is:

$$ {R_2 \over {R_1 + R_2}} = {475.5 \text{kΩ} \over {475.5 \text{kΩ}+ 477.2 \text{kΩ}}} = 0.499$$

<1% off the expected value? That’s fine…for the external voltage divider.

What about the internal voltage divider– the attenuator?

Internal calibration

There’s process variation in manufacturing discrete resistors, and likewise in creating integrated circuits (“chips”) like the ESP32. Each individual chip will have slightly different electrical properties: changes in internal resistance, changes in the ADC’s reference voltage… which can result in quite different readings!

Unlike with my breadboard, we can’t poke a multimeter inside a chip to measure these different values. Instead, the manufacturers send each chip through a calibration process before selling it, and provide the calibration information for that chip.

As far as I can tell, the ESP32-C3 is calibrated by measuring, for each attenuation level, what the ADC reads with 0mV applied (an "init code") and what it reads with a known reference voltage applied, then burning those numbers into the chip's efuses.

We can read these calibration values out with a little bit of code. We don’t need to read them out manually, but it’s interesting to see what they are!

main.rs, to read calibration values
#![no_std]
#![no_main]
use esp_backtrace as _;
use esp_hal::analog::adc::Attenuation;
use esp_println::println;

esp_bootloader_esp_idf::esp_app_desc!();

#[esp_rtos::main]
async fn main(_spawner: embassy_executor::Spawner) {
    esp_println::logger::init_logger_from_env();
    let peripherals = esp_hal::init(esp_hal::Config::default());
    use esp_hal::timer::timg::TimerGroup;
    let timg0 = TimerGroup::new(peripherals.TIMG0);
    use esp_hal::interrupt::software::SoftwareInterruptControl;
    let software_interrupt = SoftwareInterruptControl::new(peripherals.SW_INTERRUPT);
    esp_rtos::start(timg0.timer0, software_interrupt.software_interrupt0);

    use esp_hal::efuse::{AdcCalibUnit, Efuse};
    for atten in [
        Attenuation::_0dB,
        Attenuation::_2p5dB,
        Attenuation::_6dB,
        Attenuation::_11dB,
    ] {
        // The "zero" value: what the ADC reads when the input is at 0mV.
        // Internal voltages, impedances, etc. in the chip mean this is probably a non-zero value!
        let init = Efuse::rtc_calib_init_code(AdcCalibUnit::ADC1, atten);

        // A nonzero voltage applied to the input during calibration, in millivolts.
        let mv = Efuse::rtc_calib_cal_mv(AdcCalibUnit::ADC1, atten);
        // What the ADC read when the 'mv' voltage was applied.
        let cal = Efuse::rtc_calib_cal_code(AdcCalibUnit::ADC1, atten);

        println!("attenuation: {:?}", atten);
        println!("rtc_calib_init_code: {:?}", init);
        println!("rtc_calib_cal_mv: {}", mv);
        println!("rtc_calib_cal_code: {:?}", cal);
    }
}

For the chip on my breadboard, I get:

| Attenuation | init code | cal voltage | cal code |
|-------------|-----------|-------------|----------|
| 0 dB | 1364 | 400 mV | 1958 |
| 2.5 dB | 1512 | 550 mV | 2013 |
| 6 dB | 1539 | 750 mV | 1969 |
| 11 dB | 1666 | 1370 mV | 1931 |

Look at those high init values! At a glance, it looks like a third or more of the ADC range is unusable! Let's not panic, though; let's see how to use these values.

The esp_hal crate offers various calibration methods that handle reading the efuse and passing the values through the appropriate formulas7 before returning values. The three mechanisms stack: AdcCalCurve incorporates AdcCalLine, which incorporates AdcCalBasic, so we can just use AdcCalCurve and get “the best of all worlds”.

The AdcCalBasic mechanism has this comment:

Basic calibration sets the initial ADC bias value so that a zero voltage gives a reading of zero. The correct bias value is usually stored in efuse..

Failing to apply basic calibration can substantially reduce the ADC’s output range because bias correction is done before the ADC’s output is truncated to 12 bits.

Aha! This explains why our uncalibrated values were (still) saturated: they weren’t getting the bias subtracted out, so they saturated the 12 bits available. Let’s see what we get if we ask for calibrated values:


use esp_hal::peripherals::ADC1;
let atten = adc::Attenuation::_11dB;
let mut config = adc::AdcConfig::new();

// NEW: enable_pin_with_cal, instead of just enable_pin
let mut pin = config.enable_pin_with_cal::<_, adc::AdcCalCurve<ADC1>>(peripherals.GPIO2, atten);

let mut adc = adc::Adc::new(peripherals.ADC1, config).into_async();

loop {
    // ADC reading code:
    let v_adc = adc.read_oneshot(&mut pin).await;

    println!("Internally-calibrated ADC reading: {v_adc}");

    embassy_time::Timer::after(Duration::from_secs(2)).await;
}

In this setup, I got these readings:

The ADC is <1% off of the voltmeter; not bad! The datasheet allows ±35mV of error at ATTEN3; 18mV is well within that.

This gives us a calibrated value of “what’s on the ADC pin”. To map that back to the battery, we’ll also need to use the values from our external voltage divider. Note how I’m now using the measured values instead of the nominal ones–sticking with “calibrated”!
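
Rearranging the voltage divider formula from earlier gives the conversion back to the battery voltage:

$$V_{bat} = V_{adc} \times {{R_1 + R_2} \over R_2}$$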



const R1: usize = 477_200; // ohms
const R2: usize = 475_500; // ohms
loop {
    // ADC reading code:
    let v_adc = adc.read_oneshot(&mut pin).await;

    println!("Internally-calibrated ADC reading: {v_adc}");
    let v_bat = (v_adc as usize) * (R1 + R2) / R2;
    println!("Externally-calibrated battery reading: {v_bat}");

    embassy_time::Timer::after(Duration::from_secs(2)).await;
}
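
As a quick sanity check, plugging the nominal 1850mV from the divider math earlier into that formula:

$$V_{bat} = 1850\text{mV} \times {{477.2\text{kΩ} + 475.5\text{kΩ}} \over {475.5\text{kΩ}}} \approx 3707\text{mV}$$

which lands within a fraction of a percent of the battery's nominal 3.7V, as it should.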

As measured:

A little further off (~3%). Probably my measurements of the resistors have some noise, and I’m not accounting for all the resistances on the breadboard. Still, this should be good enough for me to track “how close is the battery to empty”. Success!
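
With that voltage in hand, the last step for my use case is just a threshold check. Here's a minimal sketch (the 3300mV cutoff is purely my assumption; pick whatever margin suits your battery and load):

/// Battery voltage (in mV) below which I'd call the cell "nearly empty".
/// This cutoff is an assumption on my part, not a datasheet figure.
const LOW_BATTERY_MV: usize = 3_300;

fn battery_is_low(v_bat_mv: usize) -> bool {
    v_bat_mv < LOW_BATTERY_MV
}

// e.g., inside the loop above:
// if battery_is_low(v_bat) { println!("Battery low: {v_bat} mV"); }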

As always, drop me a line if this helped you, or if you have corrections or comments!