

Fri, 11 Dec 2020 16:25:30 +0000

Plug-and-play boards aid the development of battery-operated applications that use 3-phase brushless motors.

The post Prototype boards ease cordless power tool design appeared first on EDN.


Two plug-and-play boards from STMicroelectronics aid the development of applications that use 3-phase brushless motors powered by Li-ion batteries that supply up to 56V, such as home and garden power tools. Both evaluation boards provide a potentiometer for speed variation, inputs for trigger and rotation-direction setting, thermal shutdown, and protection against reverse biasing of power-stage outputs.

STMicroelectronics PR image of the STEVAL-PTOOL board

Aimed at equipment powered by Li-ion battery packs from two cells (7.4V) to six cells (22.2V), the STEVAL-PTOOL1V1 has a 70×30-mm footprint and delivers up to 15 A of continuous current. It employs the STSPIN32F0B motor controller with embedded microcontroller, 3-phase half-bridge gate driver, 12-V and 3.3-V regulators, and op amp for current sensing. The board’s power stage is based on the STL180N6F7 60-V N-channel power MOSFET.

The STEVAL-PTOOL2V1 is for battery sizes ranging from 8 cells (29.6V) to 15 cells (55.5V) and supplies up to 19 A of continuous current. It is outfitted with the STSPIN32F0252 motor controller, which includes a 3-phase 250-V gate driver, microcontroller, and comparator. The motor controller’s output pins resist below-ground spikes down to -120V for enhanced reliability. In addition, the 77×54-mm board packs an 80-V N-channel power MOSFET.

The STEVAL-PTOOL1V1 and STEVAL-PTOOL2V1 evaluation boards cost $41 and $69, respectively. Both boards are available now.

STEVAL-PTOOL1V1 product page

STEVAL-PTOOL2V1 product page

STMicroelectronics, www.st.com








Thu, 10 Dec 2020 18:11:49 +0000

EMI from DC-DC converters has long plagued designers of wireless and IoT devices; here are answers to pressing questions on PCB design to reduce it.

The post EMI Q&A: Reduce on-board DC-DC converter EMI for wireless/IoT devices appeared first on EDN.


Self-generated EMI from DC-DC converters has long plagued designers of wireless and IoT devices. The broadband harmonic content often extends up through 1.5 GHz, which includes most wireless protocols, cellular LTE, and GPS/GNSS bands.

I’ve written several articles and presented webinars on how to reduce self-generated EMI for wireless and IoT devices, and one of the key methods is proper PCB design. Some of those articles are listed in the references below, and I recently presented a lengthy webinar on the subject; if you missed the presentation, the recorded version is located here. The webinar elicited several questions on PCB design and reducing EMI from DC-DC converters, and you’ll find my answers below.

Q: When is it OK to reference the power plane with a circuit trace?

Figure 1 An example of the usual four-layer stack-up that can have very poor EMI.

This is a common question and arises from the use of typical four- and six-layer board designs, where the power and ground return planes are usually quite separated (Figure 1). If you understand that high-frequency (>100 kHz) signals are really electromagnetic waves, whose return currents are typically referenced to digital ground return, then you’ll better understand why referencing to the power plane is a bad idea. Those return currents need to find a way back “somehow” to digital return, and the path they take may create EMI. In my opinion, non-critical signals (low-frequency, control signals, etc.) may be referenced to power if, and only if, the power and return planes are very closely coupled and well bypassed with decoupling capacitors. This is NOT generally the case for typical four- and six-layer board stack-ups. In most cases, running high-frequency digital signals referenced to the power plane is HIGH RISK for EMI. I would suggest referring to my four-part series on designing boards for low EMI.

Q: Would “ground pours” help isolate noisy signals?

The very best way to isolate “noisy” signals is through proper PCB stack-up; that is, all high-frequency (>100 kHz) digital signal traces should be adjacent to a solid return plane. This will bound the electromagnetic wave. Breaks in the return plane can cause an increase of 15 to 20 dB in EMI (see my video demo on the web site in the References). According to Dr. Eric Bogatin, ground pours more often than not don’t actually help and can even be detrimental, depending on the board design, because in some cases they appear as “breaks” in the return plane. I’d refer you to his web site for more information on PCB design and the topic of ground pours.

Q: When running a clock trace from the top to the bottom of a board, how important is it to add vias nearby for the return current?

It depends; the usual answer to many EMC questions! If the power and return planes are located close together (2-3 mils, max) and there are adequate decoupling capacitors located around the board, then it’s not as important to add a nearby via for the return current path. However, for critical traces like clocks, I’d add one or more in order to ensure a tight bound for the electromagnetic wave. I’d again refer you to my series on PCB design for low EMI.

Q: What effect do rise and fall times have on EMI, and what percentage of the pulse width should the rise and fall times be?

Dr. Eric Bogatin has some excellent discussion of this topic in his book, Signal and Power Integrity Simplified, 3rd edition (see the recommended book list below). Briefly, you can use the equation BW = 0.35/RT, where BW (bandwidth) is in GHz and RT (rise time, 10-90%) is in ns. So, for a rise time of 1 ns, the bandwidth is about 0.35 GHz, or 350 MHz. The pulse width affects the amplitude of the harmonics; as it decreases, the overall amplitude decreases as well. As the pulse width shrinks toward the fixed rise and fall times, the nice square pulse shape starts to round off and fall apart. I’m not aware of any general rule for the percentage of rise time versus pulse width.
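As a quick sanity check of that rule of thumb, here’s a minimal sketch (the function name is mine, for illustration only):

```python
def signal_bandwidth_ghz(rise_time_ns: float) -> float:
    """Bandwidth rule of thumb for a digital edge: BW (GHz) = 0.35 / RT (ns)."""
    return 0.35 / rise_time_ns

# A 1-ns rise time implies harmonic content out to roughly 350 MHz.
bw_mhz = signal_bandwidth_ghz(1.0) * 1000
print(f"{bw_mhz:.0f} MHz")  # prints "350 MHz"
```

Halving the rise time doubles the bandwidth, which is why slowing edges (where timing budgets allow) is such an effective EMI lever.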

Q: Electrons only travel at 1 cm per second?

This question has to do with my explanation of how digital signals propagate in PCBs. Most of us were taught (or at least it was implied) that signals were really electron flow in copper wires or traces and that the electrons moved at near light speed. While the electron-flow picture works for DC circuits, electrons DO NOT travel at near light speed; they are too tightly bound within the copper lattice. At high frequencies (>100 kHz), digital signals are really electromagnetic waves propagating through the dielectric layer between the copper trace and return plane. Between DC and 100 kHz, there’s a transition region where signals convert from pure DC currents to electromagnetic waves.

Figure 2 A cross-section of a microstrip over a ground return plane is a physical depiction of a digital signal in the form of an electromagnetic wave traveling within the dielectric space between the trace and return plane.

This electromagnetic propagation model consists of two elements: the propagating wave itself, which travels at about half light speed in the dielectric (assuming FR4), and a combination of conduction current, which IS electron flow in the copper lattice, and displacement current “through” the dielectric (Figure 2). This conduction current is what you’d measure with an ammeter, but the electrons are only traveling about 1 cm/sec. I’ve found this physical model of digital signal propagation is not generally taught in most fields-and-waves textbooks. However, there are two references I’d recommend: Signal and Power Integrity Simplified, 3rd edition, by Dr. Eric Bogatin (pages 245 to 252) and Electromagnetics Explained – A Handbook for Wireless/RF, EMC, and High-Speed Electronics, by Ron Schmitt (pages 33-34, 84-86 and 96-98). Also see my series on PCB design for low EMI.
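The “half light speed” figure follows directly from the dielectric constant; here’s a small sketch assuming a dielectric constant (Dk) of about 4, a typical FR4 value:

```python
import math

C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def wave_speed_m_per_s(dk: float) -> float:
    """EM wave propagation speed in a dielectric: v = c / sqrt(Dk)."""
    return C_M_PER_S / math.sqrt(dk)

# FR4 (Dk ~= 4) gives about half light speed -- roughly 15 cm/ns --
# even though electron drift in the copper is only ~1 cm/s.
v = wave_speed_m_per_s(4.0)
print(f"{v * 1e-9 * 100:.1f} cm/ns")  # prints "15.0 cm/ns"
```

The contrast between the wave speed (~15 cm/ns) and the electron drift velocity (~1 cm/s) is the point of the question above: the energy travels in the fields, not with the electrons.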

Q: Are power modules with integrated inductors better for low EMI?

Yes, because the input and output loop areas are minimized. An example is Linear Technology’s “μModule” system on a chip (SoC). See Figure 3 and the Analog Devices page on μModule buck-boost regulators.

Figure 3 This example of a DC-DC converter from Linear Technology shows the integrated inductor (or transformer, in this case), Cin, and Cout all integrated into an SoC. This design minimizes the noisy current loops, reducing EMI. Source: Linear Technology

Q: Do we need cutouts down to the bottom layer underneath the switch node to reduce electric field coupling?

That’s an excellent question! Obviously, we want to minimize the area of the switch node (SW) trace to the inductor to reduce coupling from this node, which in this example can be switching 42-V square waves and can produce intense E-fields (Figure 4).

Figure 4 Here is a typical DC-DC buck converter, showing the switch node (SW) and output inductor. The debate centers on whether to cut away the return plane around the SW node, the inductor, or both. Source: Linear Technology

Several years ago, I felt cutting away the return plane in the area of the switch node (SW) was important for reducing capacitive coupling, until I really started studying how digital signals (or power switching, in this case) work from a physics point of view. While I now believe strongly that the return plane should be maintained as a solid plane under all portions of DC-DC converters, your argument cannot be discounted completely and may depend on the exact situation.

Well-known experts in EMC and PCB design (Dr. Todd Hubing, Rick Hartley, and Daniel Beeker) maintain the return plane should be solid. On the other hand, SI and PDN experts I know (Steve Sandler, for one) are thinking along your lines. I’ve initiated a study with Steve Sandler and Todd Hubing in which we’ll investigate this question. Steve has agreed to build several circuit boards and test for signal and power integrity, and I’ll measure the radiated and conducted emissions. It should be interesting and may end up as a technical paper. For now, my opinion on a solid return plane stands until proven otherwise.

Q: With the absorber material, we see that EMI is attenuated. But isn’t that energy then deposited somewhere else, unpredictably, inside the circuit instead of leaving to the outside?

The radiated emissions from ICs or circuit traces actually get absorbed and converted to heat in the lossy ferrite material.

Q: Are series ferrite beads on DC-DC converter inputs and outputs a good idea?

Having come from an RF design background, I found series ferrite beads to be pretty common practice for RF circuits, and I still believe the technique may be used successfully there. In recent years, as I’ve studied power integrity, I’ve come to change my mind for digital power rails. For good power distribution network (PDN) performance, you don’t want any series impedances in the PDN. This was illustrated clearly by the late Steve Weir in his PowerCon presentation, as well as by Dr. Eric Bogatin and Larry Smith in their book, Principles of Power Integrity for PDN Design Simplified. If you do choose to try beads in input or output filters, be sure to add an extra bulk capacitor (4.7 to 27 μF ceramic) between the ferrite bead and the switching converter IC. Still, I don’t recommend adding them.

Q: Should DC-DC converters be placed on the bottom-side of the PCB and sensitive analog circuits on the top-side?

Yes, that’s a great idea and one that some of my clients have used successfully. Typically, the RF section is built on the top layer and all the digital processing and control are located on the bottom layer. It’s very important to have at least one solid ground return plane in the middle, and you need to be careful how any critical (that is, high-frequency) signals are routed between top and bottom. It’s important to ensure a continuous path for return currents along with the signal via.

Q: Example of excellent DC-DC converter PCB design?

All I can suggest at this time is to reduce the loop areas of both Cin and Cout (plus the switching inductor) by locating these components very close to the DC-DC converter IC, and to maintain isolation between the input and output circuits. Locate all the associated components on either the top or bottom side of the board and ensure an adjacent solid return plane.

Q: You referred to sharing Cin and Cout ground pin. Can you revisit this topic again?

When Cin (the noisy loop for buck converters) and Cout (the noisy loop for boost converters) share the same current return path to ground, noise can couple via that common impedance and contaminate the “quiet” side of whichever buck/boost topology is being used. Figure 5 shows a good example, where Cin and Cout are connected to the same point. Note that it’s not just TI that suggests these inadvertently poor layouts; ALL device manufacturers do at times. You need to be able to trace out the main current loops and ensure the primary and secondary circuits are well separated from each other.

Figure 5 This example of poor circuit layout for TI’s LMR33630 shows Cin and Cout sharing the same ground return path. This common impedance coupling will couple noise currents of this buck converter to the output voltage rail. Source: Texas Instruments

Q: What is the optimal way to isolate Cin and Cout ground references on a DC-DC converter?

This is related to the question above. The best way is through separation. If you were to lay out the circuit board according to the schematic (input loop – converter IC – output loop), you’d be in good shape.

Q: You haven’t mentioned CM and DM emissions. Are there cases where reducing PCB radiation for DM can increase CM, and vice versa? Is there a universal PCB radiation-reduction technique that reduces both types of emissions simultaneously?

The very best way to reduce BOTH CM and DM EMI is through proper stack-up of your PCB. All signal traces should have an adjacent ground return plane and all power planes/traces should also have an adjacent ground return plane. We want to confine the digital signal electromagnetic waves between the copper trace and return plane from start to end. We want to confine any power network transients (also electromagnetic waves) between copper planes/traces and return plane as well. I did some recent experiments on DM and CM conducted emissions in my article on LISN Mate.

Q: You mentioned keeping DC-DC converters a step away from processors and other digital circuits. However, low-voltage rails (1V5, 0V8, etc.) need to be closer to the digital sinks, or voltage drops will compromise voltage levels. Do you have any specific tip for this case?

Figure 6 Sometimes it makes better sense to locate DC-DC converters near what they are powering. Just make sure to follow all the usual precautions, such as keeping current loops minimized and ensuring a solid return plane underneath.

Laying out PCBs while maintaining the goal of partitioning is always a tradeoff (Figure 6). Yes, sometimes (more often than not?) DC-DC converter circuits need to be located within the digital processing area. I would just caution you to maintain the general rules for DC-DC converter layout and ensure an adjacent solid return plane underneath all the digital and power conversion circuitry.

I’d also avoid locating power conversion too close to RF sections of the system. Some wireless module manufacturers suggest locating power conversion circuits near their modules, and I’ve seen real problems with client designs that do this. There are generally large E-fields around DC-DC converter inductors and switch nodes, and locating these fields near antennas is really bad news.

In addition, I’d highly recommend planning for the use of local shields over the power conversion and digital processing sections. If they turn out to be unnecessary, fine, but know that local shields are usually needed (especially for physically small boards) and are very hard to implement if no attachment points were planned in advance.

Q: Which is most effective, EMI filters (reflective LC filters) or EMI absorbers?

Ha! Well, my guess would be that conventional filters are better, because they can attenuate 40 dB or so; however, their bandwidths are potentially narrower than that of the more broadband ferrite absorber. Flexible ferrite absorber sheets (Figures 7 and 8), on the other hand, are generally good for only 5 to 20 dB of absorption. I suppose some experiments are called for. I’d refer you to my article on ferrite absorbers.

Figure 7 Measuring ferrite absorber sheets using the microstrip attenuation method.

Figure 8 Here is an example absorption plot of the Arc-Tech WaveX ferrite absorber, which happens to work nicely in the normal cellular LTE and other wireless/GPS bands below 2 GHz.

Q: Which EMC book would you recommend?

I mentioned several above, but these favorites from my own library come immediately to mind (in no particular order):

  • Henry Ott, Electromagnetic Compatibility Engineering, 2nd edition: a more practical treatment and probably the best-known reference
  • Clayton Paul, Introduction to Electromagnetic Compatibility, 2nd edition: a more academic treatment
  • Eric Bogatin, Signal and Power Integrity Simplified, 3rd edition
  • Smith and Bogatin, Principles of Power Integrity for PDN Design
  • Steven Sandler, Power Integrity – Measuring, Optimizing, and Troubleshooting Power Related Parameters in Electronic Systems
  • Ralph Morrison, Grounding and Shielding – Circuits and Interference, 6th edition
  • Ralph Morrison, Fast Circuit Boards – stresses the electromagnetic wave nature of digital signals
  • David Weston, Electromagnetic Compatibility, 3rd edition: more oriented toward military systems
  • André and Wyatt, EMI Troubleshooting Cookbook for Product Designers: good coverage of EMC basic theory, measurement techniques, and troubleshooting
  • Wyatt, Creating Your Own EMC Troubleshooting Kit, Volume 1 – Volume 2 (emissions) and Volume 3 (immunity) coming soon
  • Würth Elektronik’s Trilogy of Magnetics, 5th edition

Kenneth Wyatt is president and principal consultant of Wyatt Technical Services.


  1. Wyatt, Characterize DC-DC converter EMI with near field probes, EDN
  2. Wyatt, Design PCBs for EMI: How signals move – Part 1, EDN
  3. Wyatt, Platform interference, EDN
  4. Wyatt, Insertion loss measurements of ferrite absorber sheets, EDN
  5. Various video demos of EMC design principles, Wyatt Technical Services
  6. Wyatt, Review: Tekbox LISN Mate is valuable for evaluating filter circuits, EDN













Wed, 09 Dec 2020 13:35:07 +0000

Here is how noise sources can be either eliminated or made insignificant in CMOS image sensor designs.

The post Digital camera design, part 5: Basic noise considerations for CMOS image sensors appeared first on EDN.


Part 4 of this article series looked at the operation of the 3T and 5T charge-transfer pixels in some detail. The characteristics of the pixel were examined during reset and charge integration. We saw how the rolling shutter functions, why the start and stop times of each line are time-offset, and how the reset reference used for a given exposure is a measurement of the reset level for the next exposure rather than the one at hand. We also saw how the reset voltage level can be affected by prior exposure, leading to image lag, and how altering the operating voltage for the reset control can improve the situation.

Next, the article showed how the basic 5T charge-transfer pixel can resolve the reset reference level issue by using a method to separate charge integration from charge sensing functions in the pixel. Finally, we saw that the charge-transfer pixel can operate in both rolling shutter and global snap shutter modes, leading to a way to solve the focal plane distortion problem suffered by the rolling shutter operating mode when motion is present in the scene. We also noted that the dynamic charge storage used in the charge-transfer pixel can result in degraded images caused by increased noise due to dark signal.

This article will look at the basics of noise in digital camera designs.

Photon statistics

The basis for creating an electronic image from a CMOS image sensor is the photoelectric effect, explained by Einstein and the subject of his 1921 Nobel Prize in physics. Photons of sufficient energy interact with silicon, creating hole-electron pairs, which are charged particles. Because they are charged, h-e pairs can be manipulated, moved, and collected by electric fields, so they can be measured as part of making an electronic image.

Photons follow Poisson statistics: there will be an average number of photons collected during any given period of time, but the actual number will vary due to the discrete nature of the source. This variation is the source of photon shot noise, the measurement uncertainty that arises from the discrete nature of photons.

From a numerical perspective, this shot noise is equal to the square root of the number of photons interacting with the silicon. For the visible light range, each interacting photon creates a single h-e pair. Therefore, the shot noise for an electron-operated visible light device is:

shot noise (e-) = SQRT (signal(e-))

The shot noise is the minimum possible noise in a single electronic image; it represents the noise floor.
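The square-root relationship is easy to put into numbers; here’s a minimal sketch (the signal value is purely illustrative):

```python
import math

def shot_noise_e(signal_e: float) -> float:
    """Photon shot noise (e-) = SQRT(signal (e-))."""
    return math.sqrt(signal_e)

# Collecting 10,000 e- gives 100 e- of shot noise, so the best possible
# SNR is 100:1 no matter how quiet the rest of the camera is.
signal = 10_000.0
noise = shot_noise_e(signal)
print(noise, signal / noise)  # prints "100.0 100.0"
```

Note that SNR improves only as the square root of the signal: quadrupling the light collected merely doubles the shot-limited SNR.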

If shot noise versus signal is plotted on log-log axes, a straight line with a slope of +1/2 will emerge, corresponding to the square root relationship between the noise and the signal (Figure 1).

Figure 1 The noise vs. signal plot allows designers to graphically determine four distinct aspects of pixel operations. Source: Etron

System noise

With no signal and exposures of zero length, noise can still be measured in electronic images. While there are a number of contributing components, they can be collectively called read noise. Contributing to the read noise are 1/f noise and random telegraph signal noise in the source follower amplifiers. Another component is reset noise: the noise associated with the signal node not resetting to the same voltage during each reset operation, as mentioned in part 4 of this article series.

Using correlated double sampling (CDS), reset noise can be effectively removed from an image. CDS uses an amplifier along with sample-and-hold circuits that sample the reset level and then the integrated signal level, and difference one from the other. The resulting signal has the reset noise removed. As shown in part 4, the 3T pixel cannot remove the reset noise at the pixel sensing level. However, the 5T charge-transfer pixel used in rolling shutter mode can, using this differencing combined with sample-and-hold amplifiers.

Dark signal noise

Dark signal is a strong function of temperature, and for a given temperature, it accumulates at a steady rate. For example, at constant temperature, a doubling of exposure time doubles the dark signal. For a constant exposure time, approximately every 5-6 °C increase in temperature doubles the dark signal.

From a noise perspective, the noise arising from the dark signal has two different components: dark shot noise and dark fixed pattern noise. Like the shot noise associated with light, the dark shot noise is mathematically equal to the square root of the number of thermally-generated electrons within the integration period:

dark shot noise (e-) = SQRT (dark signal(e-))

Dark fixed pattern noise (DFPN) is caused by the non-uniform distribution of the dark leakage current, as shown in part 3 of this article series. Mathematically, DFPN is proportional to exposure time:

DFPN = DSNU * dark signal (e-)

As long as nothing saturates, a doubling of exposure time causes a doubling of the DFPN. For a given exposure time and temperature, this dark fixed pattern is unchanged from frame to frame and can be removed from image frames by “dark subtraction” or “despiking”. An example was shown in the part 3 article.

Dark signal non-uniformity (DSNU) is determined empirically.

Dark shot noise cannot be removed from the image. If it is practical to cool the sensor, the dark signal can be made arbitrarily small. However, cooling can add a significant amount of complexity, weight, and cost and sharply increase power dissipation, so it’s not practical for many applications.
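The dark-signal behavior described above can be captured in a short model. This is a sketch with illustrative constants: the 10 e-/s reference rate and 25 °C reference temperature are assumptions for demonstration, not specifications of any real sensor.

```python
import math

def dark_signal_e(exposure_s: float, temp_c: float,
                  ref_rate_e_per_s: float = 10.0,  # illustrative, not a sensor spec
                  ref_temp_c: float = 25.0,
                  doubling_c: float = 6.0) -> float:
    """Dark signal grows linearly with exposure and doubles every ~5-6 C."""
    rate = ref_rate_e_per_s * 2.0 ** ((temp_c - ref_temp_c) / doubling_c)
    return rate * exposure_s

def dark_shot_noise_e(dark_e: float) -> float:
    """Dark shot noise (e-) = SQRT(dark signal (e-))."""
    return math.sqrt(dark_e)

def dfpn_e(dark_e: float, dsnu: float) -> float:
    """Dark fixed pattern noise = DSNU * dark signal."""
    return dsnu * dark_e

base = dark_signal_e(1.0, 25.0)
print(dark_signal_e(2.0, 25.0) / base)  # doubling exposure -> 2.0
print(dark_signal_e(1.0, 31.0) / base)  # +6 C -> 2.0
```

The two print statements confirm the text’s two scaling rules: linear in exposure, exponential in temperature.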

Fixed pattern noise

If the camera photographs a uniformly illuminated featureless target, then the resulting image should have no discernible features. Any deviation from this ideal case is caused by fixed pattern noise (FPN).

Typically, FPN has two components: optical non-uniformities associated with delivery of focused light to the image sensor with a wide field of view and pixel-to-pixel variation of the photo response of the image sensor. The image sensor manufacturer may specify a pixel level photo response non-uniformity (PRNU) and the optical contributions may be mathematically modeled or empirically determined.

Combined effect of noise components

Mathematically, the combined effect of these uncorrelated noise components is expressed as the square root of the sum of the squares of the individual components.

Total noise = SQRT (system_noise^2 + shot_noise^2 + dark_shot_noise^2 + FPN^2 + DFPN^2)

It’s worth noting that only the system noise is signal level independent. The other terms have either an exposure or time dependency. For instance, shot noise and FPN are both functions of signal charge arising from exposure. Likewise, dark shot noise and DFPN are both functions of dark signal charge, which is dependent on time and temperature.
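A short sketch makes the dominance effects concrete (the noise values below are made up purely for illustration):

```python
import math

def total_noise_e(system_e: float, shot_e: float, dark_shot_e: float,
                  fpn_e: float, dfpn_e: float) -> float:
    """Uncorrelated noise sources combine as the root sum of squares."""
    terms = (system_e, shot_e, dark_shot_e, fpn_e, dfpn_e)
    return math.sqrt(sum(t * t for t in terms))

# With 100 e- of shot noise dominating, a 5 e- read noise barely registers:
# sqrt(5^2 + 100^2 + 3^2 + 10^2 + 2^2) ~= 100.7 e-
print(round(total_noise_e(5.0, 100.0, 3.0, 10.0, 2.0), 1))  # prints "100.7"
```

This is why, once a measurement is shot-noise limited, further reducing read noise buys almost nothing; the largest term dominates the root sum of squares.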

As a result, it’s possible to plot noise versus signal, as shown in Figure 1, and graphically identify four distinct regimes of operation:

  1. Read noise limited
  2. Shot noise limited
  3. Fixed-pattern noise limited
  4. Saturation (full well)

The delineation of each regime is indicated by a change in the slope of the noise curve when plotted using logarithmic axes. The read noise limited regime has a slope of zero. The shot noise limited regime has a slope of +1/2, indicating the square root relationship between the noise and the signal. The fixed-pattern noise limited regime has a slope of +1 and the saturation regime is indicated when the noise rolls off as it begins to saturate the pixels.

From analysis of the same graph, one can graphically determine a number of critical performance parameters such as:

  1. Read system noise (DN or e-)
  2. Saturation level (DN or e-)
  3. Photo response non-uniformity or PRNU (%)
  4. Dark signal non-uniformity or DSNU (%)
  5. Camera gain constant (e-/DN)
  6. Camera gain linearity (%)

Figure 2 Photon transfer analysis enables graphical measurement of camera gain, photo response non-uniformity, read noise, and saturation level. Source: Etron

This graphical method is called photon transfer analysis and will be discussed in more detail in the next article in this series (Figure 2).

Noise minimization

The following actions can be taken to minimize the noise:

  1. Dark noise components can be reduced by:
    1. Reducing exposure time
    2. Reducing operating temperature of sensor
  2. Dark fixed pattern noise for non-saturated pixels can be removed by dark subtraction a.k.a. despiking. It involves subtracting a dark frame from the image frame, pixel by pixel. An example of despiking has been shown in part 3 of this article series.
  3. Fixed pattern noise can be removed via a process called flat fielding. The process involves dividing the image frame by a pixel calibration image frame on a pixel-by-pixel basis. The calibration frame is simply a high SNR image of a uniformly illuminated featureless background taken using a focused optical system.
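The two calibration steps above amount to a per-pixel subtract and divide. Here’s a minimal NumPy sketch; the synthetic gain map and dark frame are fabricated for illustration, and the flat frame is assumed to be a high-SNR image of the pixel gain pattern:

```python
import numpy as np

def dark_subtract(image: np.ndarray, dark_frame: np.ndarray) -> np.ndarray:
    """Despiking: remove the dark fixed pattern pixel by pixel."""
    return image - dark_frame

def flat_field(image: np.ndarray, flat_frame: np.ndarray) -> np.ndarray:
    """Divide out pixel-to-pixel gain variation using a normalized flat frame."""
    return image / (flat_frame / flat_frame.mean())

rng = np.random.default_rng(0)
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))  # 5% PRNU, synthetic
dark = rng.uniform(0.0, 50.0, size=(4, 4))       # dark fixed pattern, synthetic
scene = np.full((4, 4), 1000.0)                  # uniformly illuminated target

raw = scene * gain + dark
corrected = flat_field(dark_subtract(raw, dark), gain)  # flat == gain map here
print(np.allclose(corrected, corrected.mean()))  # prints "True": pattern removed
```

In this idealized case (noiseless calibration frames), the corrected image of a featureless target comes out perfectly flat, which is exactly the goal of both steps.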

Shot noise and read noise are fundamental limits

The only noise components that cannot be removed from an image with non-saturated pixels are the read noise, the image shot noise and the dark shot noise. If it is feasible to cool the sensor, the dark shot noise can be reduced to arbitrarily low levels so as to be a non-factor.

Whether dark subtraction and flat fielding are practical and beneficial depends on the application. For example, a high frame rate video camera may not have much in the way of a dark signal; not much dark signal charge can accumulate in one frame in 1/60 of a second. On the other hand, for a still image of 30 minutes exposure time, a cooled sensor will likely be required.

Fast wide-angle lenses typically introduce significant light intensity variation from center of the field of view to the edges. This variation is fixed from frame to frame as part of a fixed pattern and is proportional to the average intensity of the image. Flat fielding can remove this fixed pattern but may be impractical to apply to a video stream in real-time because the computational bandwidth may exceed the system capabilities in a low-cost consumer product. On the other hand, it may be a small matter to apply to a high-resolution still image.

The sensor’s characteristics and the design of the camera using it will establish the system and read noise characteristics. Contributing factors include sensor design and fabrication technology, as well as camera design parameters such as power supply noise and decoupling, PCB signal routing, and shielding, particularly the shielding and isolation of digital circuits from the small-signal-sensitive analog circuits.

Image sensor’s noise components

The image sensor design and wafer fabrication technology have an enormous impact on the system noise. This system noise can be decomposed into three major components: amplifier noise, reset noise, and column offset noise. Like other uncorrelated noise sources, the combined effect of these sources is again the square root of the sum of the squares:

Sensor system noise = SQRT (reset_noise^2 + column_offset_noise^2 + amplifier_noise^2)
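The root-sum-square combination above is easy to check numerically; the noise values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def sensor_system_noise(reset_noise, column_offset_noise, amplifier_noise):
    """Combine uncorrelated noise sources (all in electrons, e-) as the
    square root of the sum of the squares."""
    return math.sqrt(reset_noise**2 + column_offset_noise**2 + amplifier_noise**2)

# Hypothetical values for illustration: column offset noise dominating
combined = sensor_system_noise(5.0, 12.0, 3.0)  # about 13.34 e-
```

Note how the largest source dominates the total, which is why removing column offset noise pays off so handsomely.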

As discussed in part 4 of this series, the pixel design can affect the reset noise. A charge-transfer pixel can be used in a correlated double sampling scheme to eliminate reset noise on-chip, in the analog processing domain.

Because each column has its own amplifier, the zero-level signal will vary column to column. This column-to-column variation is called column offset noise and often can dominate the sensor’s system noise. Fortunately, it can be removed by subtracting a zero-length exposure from the image.
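That zero-length-exposure ("bias frame") subtraction amounts to a per-pixel difference; a minimal sketch, with illustrative pixel values only:

```python
def subtract_bias(image, bias):
    """Remove column offset noise by subtracting a zero-length ('bias')
    exposure, pixel by pixel. Both arguments are 2-D lists of raw DN
    values; the names and data here are illustrative."""
    return [[p - b for p, b in zip(img_row, bias_row)]
            for img_row, bias_row in zip(image, bias)]

# Each column carries a fixed offset that the bias frame also records
image = [[105, 210, 155],
         [107, 212, 157]]
bias  = [[100, 200, 150],
         [100, 200, 150]]
corrected = subtract_bias(image, bias)  # column offsets removed
```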

Image sensor’s fundamental noise floor

The only on-chip noise source that is fundamental and cannot be removed is the source follower amplifier noise. A graph showing the relative magnitudes of these noise sources is shown in Figure 3.

Figure 3 The CMOS offset, reset, and source follower noise are the principal on-chip noise sources. Source: Etron

In Figure 3, an iron-55 (Fe-55) soft x-ray source has been used to irradiate the sensor. The energy level of the x-ray liberates 1,620 electrons per interacting x-ray. Among other uses, this provides a convenient way to calibrate the digital number (DN) to the electron count; each recorded 'hit' is 1,620 electrons. The peaks at 6,480 DN correspond to 1,620 e-, yielding a calculated camera gain of 0.25 e-/DN.

As Figure 3 reveals, the magnitude of the offset and reset noise is significant compared with the "noise floor" represented by the source follower. So there is high value in eliminating these noise components if image signal-to-noise ratio (SNR) at low signal levels is important in your application.

Figure 4 Faint star images obscured by read noise (top) are revealed by combining 64 images (bottom). Source: Etron

An example of how a faint signal can be "buried" in noise is shown in Figure 4. In this case, combining 64 images attains roughly a factor-of-8 reduction in effective read noise, taking the 7.6 e- noise of a single image down to 0.97 e- in the combined image. Future articles will explain more about noise calculations and strategies for optimizing signal-to-noise ratio.
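The factor-of-8 figure follows from the square-root law for averaging statistically independent frames; note that this idealized model gives 0.95 e- for the article's numbers, slightly below the 0.97 e- achieved in practice, since real frames are never perfectly independent:

```python
import math

def stacked_read_noise(single_frame_noise, n_frames):
    """Averaging n independent frames reduces effective read noise
    by the square root of the frame count (idealized model)."""
    return single_frame_noise / math.sqrt(n_frames)

# 64 frames give a factor-of-8 reduction (sqrt(64) = 8)
effective = stacked_read_noise(7.6, 64)  # about 0.95 e- by this model
```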

Richard Crisp is VP of New Product Development for Etron Technology America.





The post Digital camera design, part 5: Basic noise considerations for CMOS image sensors appeared first on EDN.


Tue, 08 Dec 2020 21:56:48 +0000

Calculating the number of octaves and/or decades for numerical value ratios is actually a simple mathematical process.

The post Calculate octaves and decades for numerical value ratios appeared first on EDN.


The so-called frantic '50s and swinging '60s are decades in the sense of 10-year time spans, but when we use the term "decade" in engineering, we are referring to numerical values in 10-to-1 ratios. "Ten" is one decade above "one," "six hundred" is one decade above "sixty," and so forth. Similarly, we have the term "octave" for 2-to-1 ratios: "two" is one octave above "one," "nine hundred" is one octave above "four hundred fifty," and so forth.

However, when we have number pairs that are not in a convenient two-to-one ratio or a ten-to-one ratio, we need to do a little math to find their relationship in terms of octaves and/or decades.

Figure 1 Here is the math for finding octaves and decades.

In the method of calculation from Figure 1, "x" is either octaves or decades: x = log(Value 2 / Value 1) / log(2) for octaves, or x = log(Value 2 / Value 1) / log(10) for decades.

For example, if we arbitrarily let “Value 1” be 394 Hz and we let “Value 2” be 17831 Hz, we find these two frequencies to be [ log(17831/394) / log(2) ] = 5.5 octaves apart and also [ log(17831/394) / log(10) ] = 1.656 decades apart from each other.

Please feel free to check these numbers on a calculator or in Excel. You will find that 2^5.5 and 10^1.656 are equal to each other and come to 45.256. You will also find that 394 × 45.256 = 17831. Although there are some rounding errors due to the numbers of significant digits we’re using, that’s not germane to this thesis.
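The check suggested above is also easy to script; a minimal sketch of the Figure 1 math:

```python
import math

def octaves(value1, value2):
    """Number of octaves (2-to-1 ratios) between two values."""
    return math.log(value2 / value1) / math.log(2)

def decades(value1, value2):
    """Number of decades (10-to-1 ratios) between two values."""
    return math.log(value2 / value1) / math.log(10)

oct_apart = octaves(394, 17831)   # about 5.5
dec_apart = decades(394, 17831)   # about 1.656
```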

Please also note that it does not matter which base of logarithms you use as long as you are consistent in your choice. You can use common logarithms or you can use natural logarithms, but the ratios of the logarithms will be the same.

Figure 2 You can use common logarithms or you can use natural logarithms, but the ratios of the logarithms will be the same.

We now look at a few examples of calculating octaves and decades.

Figure 3 Here are a few examples of calculating octaves.

Figure 4 Here are examples for calculating decades.

Looking at the impacts of decades or octaves can sometimes be a bit startling; see this 2015 post on galactic oscillation.

Back in 2011, astronomers disclosed that the disk of the Milky Way is vibrating, or oscillating, at a frequency 64 octaves below middle-C. You can play middle-C on the piano and easily hear it, but going that many octaves down yields the impressive result that the period of the Milky Way's oscillation comes to more than 2 billion years.
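That period is straightforward to verify; taking middle-C as roughly 261.63 Hz (an assumption, since the post doesn't state its exact pitch reference):

```python
MIDDLE_C_HZ = 261.63  # approximate frequency of middle-C (assumed)

def period_octaves_below(base_hz, n_octaves):
    """Period in seconds of a tone n_octaves below base_hz."""
    return (2 ** n_octaves) / base_hz

SECONDS_PER_YEAR = 365.25 * 24 * 3600
period_years = period_octaves_below(MIDDLE_C_HZ, 64) / SECONDS_PER_YEAR
# comes to roughly 2.2 billion years
```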

Figure 5 The disk of the Milky Way is oscillating at a frequency 64 octaves below middle-C.

Even Paul Robeson couldn’t have matched this bass note.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).



Tue, 08 Dec 2020 10:25:44 +0000

The mixing of design competencies poses new challenges for design engineers.

The post Multi-disciplinary integration is a design reality appeared first on EDN.


EDN and EE Times are conducting regular reviews of our editorial coverage, seeking to fill the gap between our offerings and our readers' appetite for information. Our brain trust for this project is the EDN Editorial Advisory Board, a panel of industry luminaries—CTOs, executives, and university professors—who will help us understand where the electronics industry is heading and unearth the knowledge the engineering and business communities need during this time of rapid change.


While the integration of semiconductor devices in system-level designs has reached a whole new level and PCBs are using advanced substrates, what matters the most in this new era of multi-disciplinary integration? According to Jean-Christophe Eloy, president and CEO of Yole Développement, the mixing of different fields calls for electronics designs to be synchronized.

Regarding the mixing of competencies across the electronics design supply chain, Eloy quoted the example of image sensors: “While processing is super important in imaging designs, what about the equally critical optical design expertise?”

The mixing of design competencies is a new reality in the electronics world, according to Jean-Christophe Eloy.

Eloy added that the new era of multi-disciplinary integration could also lead to new ways of collaboration among design teams and how they solve distinct problems. For instance, how can temperature sensors be integrated into wearable devices serving healthcare applications? That encompasses electrical as well as optical design skills.

Here, design engineers can seek the latest information from design publications like EDN. Designers can also consult the technical literature available on these publishing platforms for background. "Design publications have an important role to play in this behavioral shift in product development," Eloy said.

Read a more detailed interview with Eloy on our sister publication EE Times.

Majeed Ahmad, Editor-in-Chief of EDN, has covered the electronics design industry for more than two decades.



Mon, 07 Dec 2020 23:59:20 +0000

When replacing a two-bay NAS, you’ve got no shortage of options.

The post NAS successors add notable features appeared first on EDN.


In my previous post, I discussed the unexpected failure of my longstanding ReadyNAS NV+ NAS, as well as the path to recovery so that I could get the stored data off it. What I alluded to then, but didn't yet cover in detail, was what I ultimately replaced both it and its ReadyNAS Duo "little brother" with. Therein lies the purpose of this particular piece, along with another one to follow it.

About 4.5 years ago, when researching potential replacements for my Windows Media Center-based networked television content playback setup, I picked up a diskless QNAP TS-231 on sale at Newegg for $139. SiliconDust's HDHomeRun PVR software was, as long-time readers may recall, one of the Windows Media Center successors I had been considering; its claimed features included the ability to run natively on the same NAS whose hard drive(s) was/were being used as recording media, versus dedicating a full-blown PC to the task. And some QNAP products were among those supposedly supported by HDHomeRun PVR (as were some from WD; a 2 TByte My Cloud NAS is also collecting dust in my storage cabinet as I write this), so since the price was right, I took the plunge.

photo of the front QNAP TS-231 NAS

photo of the QNAP TS-231 NAS drives

photo of the back of the QNAP TS-231 NAS

The TS-231 has dimensions of 6.65”(H) × 4.02”(W) × 8.62”(D) (169 × 102 × 219 mm) and a weight of 2.82 lb (1.28 kg), not including the HDDs. So that must be what I replaced the ReadyNAS Duo with, right? Not quite.

While I dawdled (and to this day continue to dawdle) over retiring my now-EOL’d Windows 7-based setup in favor of something else, an even better deal came along: a QNAP TS-328 for $169.99 at Woot! in January 2019. Dimensions are 5.59” (H) × 5.91” (W) × 10.24” (D), and its net weight is 3.62 lbs.

photo of the front of the QNAP TS-328 NAS

photo of the QNAP TS-328 NAS hot-swappable drives

photo of the back of the QNAP TS-328 NAS

Why’d I buy another compact NAS, when I already had one queued up? Well, this one’s based on a quad-core 1.4 GHz Arm Cortex-A53 SoC, versus the dual-core 1.2 GHz Arm Cortex-A9 in the TS-231. It’s also got four times the RAM (2 GBytes vs 512 Mbytes). But most compelling to me is that (as you may have already discerned from the product naming) it holds up to three HDDs, versus two with the TS-231. Two- and four-drive (and even larger “x2” combination) NAS are most common, but as QNAP’s own promotional text notes:

30% of QNAP users choose to build RAID 5 array for their NAS for higher data protection, better system performance and more available storage space. The TS-328 is QNAP’s first 3-bay NAS, allowing you to build a RAID 5 array on your NAS with the fewest disks.

RAID 5 has always been my preference (with RAID 1 as my second choice for mirrored redundancy if I cared most about long-term data integrity, or RAID 0 if striped speed was more important to me), because it combines striped performance with parity-based redundancy in one setup. And as QNAP accurately notes, you can actually do RAID 5 using only three drives. With a 3 TByte HDD as the granular storage unit, for example, I'm able to construct a three-drive 5.34 TByte RAID 5 volume that's not only redundant but also both faster and bigger than the two-drive <3 TByte (accounting for partition and format overhead) RAID 1 alternative in the TS-231.
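The capacity arithmetic behind that comparison can be sketched as follows; this rough model ignores the partition and format overhead mentioned above, which is why three 3-TByte drives yield roughly 5.34 TBytes usable rather than the full 6:

```python
def raid_usable_tb(drive_tb, n_drives, level):
    """Approximate usable capacity (before filesystem overhead) for a
    few common RAID levels. A rough sketch for illustration only."""
    if level == 0:                       # striping, no redundancy
        return drive_tb * n_drives
    if level == 1:                       # mirroring
        return drive_tb
    if level == 5:                       # striping + distributed parity
        if n_drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return drive_tb * (n_drives - 1)
    raise ValueError("unsupported RAID level")

# Three 3-TByte drives in RAID 5: 6 TBytes raw, vs 3 TBytes for RAID 1
usable_raid5 = raid_usable_tb(3, 3, 5)
usable_raid1 = raid_usable_tb(3, 2, 1)
```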

But, as it turns out, the only thing I’m using the TS-328 for (at least right now) is the partitioned combo of a networked Time Machine backup destination for my Macs (using up to 3 TBytes) and Windows system backups (both daily File History and weekly Backup and Restore) using the remainder of the available space. Why? It’s because, as post-purchase research revealed, the TS-328’s long-term reliability seems sketchy. If the NAS dies and only my backups disappear, it’s no huge loss (assuming I can get backups going again on some other storage device before the system being backed up dies too!). But if I were to lose my music, photos, and other priceless files again, as almost happened with the ReadyNAS NV+? Nah, don’t want to go there again.

The TS-328 dates from April 2018 and is still sold, so I’m hopeful that the failures are just reflective of an atypical (early?) production batch (of which my unit was hopefully not part). And if the TS-328 were to die, I could just revert back to the TS-231, right? Seems reasonable.

But I happened to notice the other day, in conjunction with reassuring myself that the TS-328 was still fully supported by QNAP, that the TS-231 wasn't. Although, according to the product support status page, QNAP still offers technical support and security updates on the device (at least until February 2022), the most recent version of the Linux-based QTS operating system available for it is v4.3.5, which dates from September 2018 (and per the support page notes, I'm assuming was last tweaked in February 2019). Here's the problem: perhaps unsurprisingly to readers, QNAP's (and others') NAS are common targets of hackers. In my particular case, this backup-only NAS doesn't require full exposure to the internet, but the myQNAPcloud service would still be a potential Achilles' heel. And as the most recent (as I write this) QNAP vulnerability exemplifies, an attack vector could even come from elsewhere on the LAN, not just over the WAN. Are fixes for such vulnerabilities covered by QNAP's security updates on an otherwise-EOL product? And do such fixes cover only the core operating system, or do they also encompass QNAP's (and partners') suite of apps? Put me down as dubious; the TS-231 is likely headed to Goodwill unused, perhaps with an EDN teardown beforehand.

Instead, taking advantage of another sale (this time at Newegg), I’ve picked up a TS-231 successor, the TS-231K, for $154 ($45 off the list price). With the obvious exception of its two-vs-three-bay design, it’s otherwise reminiscent of (albeit a step behind) the TS-328; a quad-core processor (this time based on the Arm Cortex-A15), 1 GByte of onboard RAM, etc. And, since it just launched in April of this year, it presumably has plenty of (fully-supported) life left in it.

photo of the front of the QNAP TS-231K NAS

photo of the QNAP TS-231K NAS drives

photo of the back of the QNAP TS-231K NAS

Next time, I’ll wrap up this series with a post covering two main topics:

  • what I replaced the ReadyNAS NV+ “big brother” with, and
  • what HDDs I filled both NAS with.

Until then, I welcome your thoughts in the comments!

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



Mon, 07 Dec 2020 13:45:34 +0000

High-level requirements like state-machine functionality and complying with standards for cybersecurity and functional safety are crucial.

The post Why and when software testing matters in embedded systems appeared first on EDN.


In the world of embedded systems, it isn’t just the technology that continues to develop and evolve. The tools and the methods used to develop that technology are maturing and improving in tandem.

In the early 1980s, I developed software for a small metrology company, applying engineering math to coordinate measuring machines (CMMs). I’d like to think that I was pretty good at it. But our development lifecycle essentially regarded production software as a sandbox. We’d start with production code, add functionality, perform some fairly rudimentary functional testing, and ship it.

In such a small company, our engineering team naturally included both software and hardware specialists. In hindsight, it is striking that while the need for extensive customer support was a given for the software we developed, there was nowhere near the same firefighting culture for the hardware it ran on.

Software development is an engineering discipline

Part of the difference between the software and hardware support is a result of the crude development process. But the sheer malleability of software and the resulting capacity for ever-increasing functionality also play a major part. Simply put, there are far more ways to get it wrong than to get it right, and that characteristic demands that it is treated as an engineering discipline.

There’s nothing new about any of this. Leading aviation, automotive, and industrial functional safety standards such as DO-178, ISO 26262, and IEC 61508 have demanded such an approach for years. But having an engineering discipline mindset is essential if you are to reap the benefits of today’s cutting-edge development and test tools, which are designed to serve such an approach.

More recently, the importance of software testing has been shown by the development of ISO/IEC/IEEE 29119, an international set of standards for software testing that can be used within any software development lifecycle or organization.

Requirements matter

Electrical system design often starts with a state machine and an understanding of the different operating modes for a particular product. Engineers can typically map that state-machine functionality to logic very quickly and easily. As the state machine grows more complicated, it is often implemented in software instead.

High-level requirements are essential to making sure the system functions correctly. Such requirements characterize the business logic and intended functionality and enable engineers to evaluate whether the system does what it's supposed to do. Best practices follow the flow from high-level requirements through analysis into coverage, and naturally, requirements traceability tools are designed to support this.

In the state-machine model, requirements that characterize each state are examples of high-level requirements. Tracing the execution path through code to ensure that each requirement is interpreted correctly is a very good way to check correct implementation.

Functional safety standards extend this to a concept of requirements traceability. They often mandate that users exercise all of their code from high-level requirements and explain and test any uncovered cases with low-level testing. Lately, the "shift left" paradigm in cybersecurity has echoed this message, as the V-model in Figure 1 illustrates.

Figure 1 As the name implies, the V-model embodies a product development process that shows the link between the test specifications at each phase of development. Source: LDRA

Test components, then test system

In any engineering discipline, it’s important to make sure that components work correctly on their own before being integrated into a system. To apply that thinking to software, engineers need to define lower-level requirements and ensure that each function and set of functions play their part. Engineers also need to ensure they present appropriate interfaces to the rest of the system.

Unit testing involves parameterizing inputs and outputs at the function and module levels, then reviewing the results to ensure that the connection between inputs and outputs is correct and that the logic is exercised with adequate coverage. Unit test tools can provide proven test harnesses and graphical representations connecting individual inputs and outputs to execution paths, enabling their correctness to be verified.
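The parameterizing of inputs and outputs described above can be sketched in a few lines of any language; the saturating_add function and its cases below are hypothetical stand-ins for a real unit under test:

```python
def saturating_add(a, b, limit=255):
    """Hypothetical unit under test: add two counts, clamping the
    result at a saturation limit."""
    total = a + b
    return limit if total > limit else total

def run_unit_tests():
    """Minimal harness: each case pairs parameterized inputs with the
    expected output, mirroring what a unit test tool automates."""
    cases = [((10, 20), 30),      # nominal path
             ((250, 20), 255),    # saturation path
             ((0, 0), 0)]         # boundary
    return all(saturating_add(*inputs) == expected
               for inputs, expected in cases)

all_passed = run_unit_tests()
```

A real unit test tool adds the harness generation, stubbing, and coverage reporting that this sketch omits.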

It’s also important to understand interfaces, both at the functional and module levels. Static analysis tools can show these interfaces and connect the logic at different levels.

Find problems as early as possible

Engineers from any discipline will tell you that the earlier problems are discovered, the less it will cost to fix them.

Static analysis performs source code analysis to model the execution of a system without actually running it. Available as soon as code is written, static analysis can help developers to maximize clarity, maintainability, and testability of their code. Key features of static analysis tools include:

  1. Code-complexity analysis: Understanding where your code is unnecessarily complicated, so engineers can perform appropriate mitigation activities.
  2. Program-flow analysis: Drawing design-review flow graphs of program execution to make sure that the program executes in the expected flow.
  3. Predictive runtime error detection: Modelling code execution through as many executable paths as possible and looking for potential errors such as array bounds overflows and divide-by-zeros.
  4. Coding standards adherence: Coding standards are often chosen to ensure a focus on cybersecurity, functional safety, or in the case of the MISRA standards, either one or both. Coding standards help to ensure that code adheres to best programming practices, which is surely a good idea irrespective of the application.
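As an illustration of item 3, the hypothetical fragment below contains the kind of path-dependent defect (a reachable divide-by-zero) that predictive runtime error detection models execution paths to find, together with the guard that removes it:

```python
def average_reading(samples):
    """Hypothetical example of the defect class that predictive runtime
    error detection targets: without the guard below, an empty list
    makes len(samples) zero and the division raises at runtime."""
    if not samples:          # guards the path a path-sensitive analyzer would flag
        return 0.0
    return sum(samples) / len(samples)

ok = average_reading([2.0, 4.0, 6.0])   # nominal path
guarded = average_reading([])           # returns 0.0 instead of failing
```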

Figure 2 Activities like static analysis are an overhead in the early part of the development lifecycle, but they pay dividends in the long run. Source: LDRA

Developing code of adequate quality

It's no surprise that higher-quality engineering products are more expensive. Adhering to any development process comes at a cost, and it may not always be commercially viable to develop the finest software possible.

Where safety is important, functional safety standards will often require an analysis of the cost and the probability of failure. This risk assessment is required for every system, subsystem, and component to make sure that proportionate mitigation activities are performed. That same principle makes sense whether systems are safety-critical or security-critical. If you test every part of the system with the same level of rigor, you will over-invest in parts of your system where the risk is low and will fail to adequately mitigate failure where the risk is higher.

Software safety practice starts with understanding what will happen if the component or system fails and then tracks that potential failure into appropriate activities to mitigate the risk of it doing so. Consider, for example, a system that controls an airplane’s guidance where failure is potentially catastrophic. Rigorous mitigation activities must be performed at the sub-condition coverage level to ensure correct code generation.

Contrast that with an inflight entertainment system. If this system fails, the aircraft will not crash, so testing an inflight entertainment system is less demanding than a system where there is the potential for immediate loss of life.

The malleability of software is both a blessing and a curse. It makes it very easy to make a system do practically anything within reason. But that same flexibility can also be an Achilles’ heel when it comes to ensuring that software doesn’t fail.

Even in the commercial world, while not all software failures are catastrophic, they are never desirable. Many developers work in the safety- and security-critical industries, and have no choice but to adhere to the most exacting standards. But the principles those standards promote are there because they have been proven to make the resulting product function better. It therefore makes complete sense to adopt those principles in a proportionate manner regardless of how critical the application is.

Despite the confusing plethora of functional safety and security standards applicable to software development, there is far more similarity than difference between them. All of them are based on the fact that software development is an engineering discipline, demanding that we establish requirements, design and develop to fulfill them, and test early against requirements.

Adopting that mindset will open the door to an entire industry’s worth of supporting tools, enabling more efficient development of higher-quality software.

Mark Pitchford, technical specialist with LDRA Software Technology, has worked with development teams looking to achieve compliant software development in safety and security critical environments.



Fri, 04 Dec 2020 18:36:50 +0000

An FPGA-based solution highlights the challenges and tasks involved in designing a capture device for iris recognition.

The post How image acquisition works in iris-recognition applications appeared first on EDN.


Iris recognition is catching up to other popular biometric applications, such as fingerprint and facial recognition, in worldwide usage. It’s a highly accurate technology because human iris patterns don’t change with age and are more challenging to counterfeit. However, the iris’ qualifying image is also more challenging to capture than a face or fingerprint.

Generally speaking, the implementation of an iris biometric system has three major components: the image acquisition device (the iris camera); the biometric software for template creation, enrollment, and matching; and the database management platform commonly used in passport control, airport kiosks, access control, and law enforcement applications (Figure 1). This article focuses on the design of the image acquisition device.

Figure 1 An iris biometric system has these three major building blocks. Source: Videology Imaging Solutions

Let’s look at some of the key challenges in the design of a capture device for iris recognition.

Target distance

The larger the distance from the camera to the subject's eyes, the more complex and expensive the capture device will be. Consequently, iris cameras are divided into categories based on their capture range; 10-30 cm is the most common. The image sensor characteristics and the modulation transfer function (MTF) of the system are critical to further subcategorizing the camera as a means for enrollment and/or verification. The former requires a higher MTF.

Variety of eye colors

Since image contrast is used to extract the patterns of the iris, the capture light wavelength has to support a broad range of colors. Thus, the spectral response of the sensor and the illumination wavelength must be chosen with considerations for power consumption, IEC-62471 compliant eye safety radiation, and synchronization of the pulsing light with the integration time of the camera.

Motion blur

The device has to tolerate some degree of target motion, and that's where the capture volume designed into the optics of the device plays an important role, particularly the depth of field of the camera. The aperture of the lens, the amount of light radiated during capture time, the light wavelength, and the properties of the lens determine the depth below and above the target distance over which the camera stays in focus.

Match-ability and interoperability

A good iris image acquisition metric compares images of the same subject captured with the same camera and computes the percentage of images that successfully match. This concept is known as match-ability. When images from the same subject taken with cameras from different vendors are matched, the concept is known as interoperability.

The percentage of images that match indicates how well the camera under test will work alongside other iris cameras in the same system. The Hamming distance between images is the metric used to measure match-ability; it ranges from 0 to 1, and the closer to 0, the more nearly identical the images are. The camera settings for image enhancement are tuned to set the best conditions under which the captured images achieve high scores of match-ability and interoperability.
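A minimal sketch of the fractional Hamming distance between two bit-encoded iris templates (real matchers also apply occlusion masks and test rotations, which are omitted here, and the example bit patterns are purely illustrative):

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two equal-length iris codes
    (bit sequences): 0 means identical; values near 0.5 are typical
    for codes from different eyes."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

# One bit differs out of eight: a low (i.e., good) match score
same_eye = hamming_distance([1, 0, 1, 1, 0, 0, 1, 0],
                            [1, 0, 1, 1, 0, 1, 1, 0])
```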

Iris recognition standards

There are two ISO standards that set the requirements for iris image quality and the exchange of iris image information: ISO/IEC 29794-6 and ISO/IEC 19794-6. There is no official biometric standard for iris camera selection equivalent to, for example, the FBI's Appendix F for fingerprints. However, NIST provides guidance to standardize and develop this technology through a significant number of publications and studies.

A key component of iris recognition is, of course, the biometric software that has the algorithms to extract the unique patterns from the iris image to create an iris template. The iris template is the instrument used for enrollment and matching. Currently, template creation is proprietary, meaning that a template file created with one biometric software can only be encoded and decoded by that software package, which is exclusive to the company that owns the software.

Algorithm development for iris image quality grading, template creation, enrollment, and matching is a whole field in itself. Some companies do both the image acquisition device and the biometric software, while others do one of them. As this technology keeps growing, the opportunity expands for image acquisition devices and access control embedded platforms that can work with multiple third-party biometric software products.

The iris camera requirements for enrollment and verification (matching) are different. The ISO standards mentioned earlier apply to enrollment cameras; verification cameras, however, are not required to follow them. Aiming to develop a low-cost camera that handles both enrollment and verification offers an alternative to integrators of iris recognition solutions, because they can then architect a solution whose iris cameras and software platform come from different vendors.

It would be a budgetary solution for the small integrators who seek access control for a small factory or school. At the same time, it would be a viable procurement strategy for a large integrator, such as the government or the military.

The main task of the acquisition device is to locate the iris of a person within a live video stream and to deliver it to the biometric software for segmentation, which, in turn, creates an iris template for either enrollment or verification. Consequently, in addition to the capture volume specifications, the acquisition device’s on-board processing has to perform a combination of tasks at runtime, which can be carried out by an Arm processor, an FPGA, or an embedded Linux board. That’s how some self-contained systems function in access control applications.

An FPGA-based solution

Here is an example of the iris recognition tasks involved in an FPGA-based solution.

Focus assessment

This determines which frames are worth processing while discarding the rest.
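The article doesn't prescribe a specific focus metric, but a common low-cost heuristic is the variance of the Laplacian: sharp frames carry strong high-frequency content, so the Laplacian response varies widely. A minimal NumPy sketch (the threshold value is illustrative, not from the source):

```python
import numpy as np

def focus_score(frame):
    """Variance of the 3x3 Laplacian response; higher means sharper."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = frame.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):            # correlate the kernel over the frame
        for j in range(3):
            out += k[i, j] * frame[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def worth_processing(frame, threshold=100.0):
    """Gate frames before they reach the biometric software."""
    return focus_score(frame) >= threshold
```

A defocused or constant frame scores near zero and is discarded; only frames above the threshold are passed downstream, which keeps the segmentation stage from being fed unusable samples.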

Iris location

There are various techniques to carry out this task. A popular one is to use a circular edge detector to locate the coordinates of the iris within the image.

Image cropping for segmentation

Once the coordinates of the iris are located, the image of the corresponding eye is cropped from the full frame and delivered to the biometric software for image quality assessment and subsequent template creation.

Figure 2 This diagram shows how an FPGA board performs the iris acquisition process. Source: Videology Imaging Solutions
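The circular edge detection and cropping steps above can be sketched in a few lines. The brute-force search below is a toy Daugman-style detector (the grid spacing, radius range, and margin are illustrative assumptions, not values from the source), but it shows the core idea: pick the circle whose boundary has the largest intensity jump, then crop around it.

```python
import numpy as np

def locate_iris(img, radii=range(10, 40, 4), grid_step=8, n_samples=64):
    """Toy circular edge detector: return the (cx, cy, r) whose circular
    boundary shows the largest jump in mean intensity between radius r
    and r + 2."""
    h, w = img.shape
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    margin = max(radii) + 3            # keep all samples inside the image
    best, best_score = None, -1.0

    def ring_mean(cx, cy, r):
        rows = (cy + r * sin_t).astype(int)
        cols = (cx + r * cos_t).astype(int)
        return img[rows, cols].mean()

    for cy in range(margin, h - margin, grid_step):
        for cx in range(margin, w - margin, grid_step):
            for r in radii:
                score = abs(ring_mean(cx, cy, r + 2) - ring_mean(cx, cy, r))
                if score > best_score:
                    best, best_score = (cx, cy, r), score
    return best

def crop_eye(img, cx, cy, r, margin=1.5):
    """Crop a square region around the detected iris for the biometric
    software's segmentation stage."""
    s = int(r * margin)
    return img[max(cy - s, 0):cy + s, max(cx - s, 0):cx + s]
```

A production implementation would use a coarse-to-fine search or a Hough transform over an edge map rather than this exhaustive loop, which is written only for clarity.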

Parallel computations, control of the video pipeline, and power consumption are important considerations for analyzing the image. Some iris cameras use a multi-camera solution, in which one camera looks at the face to locate the eyes within the capture volume, which then triggers another image sensor for the iris capture. In contrast, other cameras use only one sensor for iris capture.

These choices depend on the design tradeoffs and camera category that a designer wants to target along with other design considerations such as distance range, enrollment vs. verification, and software development kit (SDK) integration.

In a recent national biometric rally conducted by the Department of Homeland Security, the whole transaction time—from the moment a person stands in front of the iris camera to template creation for both irises—was targeted at no more than 8 seconds, with a goal of no more than 5 seconds. External factors affect the transaction time, some of which are out of the system's control, including eye occlusion, gaze rotation, a person's height, and eye diseases. Therefore, the acquisition device should analyze the images as fast as possible so that the biometric software doesn't get saturated with useless samples.

Board designs for iris recognition

The acquisition device is fundamental to the iris recognition process. Therefore, as the iris recognition industry gains traction in the biometrics field, the opportunity grows for accurate and cheaper acquisition devices that can carry out the search for the coordinates of the iris and apply image enhancement techniques on-board.

At the same time, however, the robustness of the iris biometric software is crucial to grading, segmenting, and matching an iris image. Through the development process, an iris biometrics software package with multiple grading parameters is critical, allowing the acquired images to be analyzed and tuned to achieve the highest possible image quality grade, consistent iris dilation, consistent focus, and high contrast.

There is simply no substitute for speed. The speed at which the camera locates the coordinates of the iris and delivers the iris image to the biometric software for segmentation is no less critical than the quality of the captured image.

W. Luis Camacho is a senior electrical engineer at Videology Imaging Solutions and the architect of the IDentity-1 iris scanner.


The post How image acquisition works in iris-recognition applications appeared first on EDN.


Fri, 04 Dec 2020 18:23:09 +0000

TIVP isolated probes provide accurate differential measurements on reference voltages slewing ±60 kV at 100 V/ns or faster.

The post Oscilloscope probes employ optical isolation appeared first on EDN.


TIVP series isolated probes from Tektronix provide accurate differential measurements on reference voltages slewing ±60 kV at 100 V/ns or faster. Leveraging optical isolation, these second-generation IsoVu probes virtually eliminate common-mode interference. Further, they are about one-fifth the size of the first generation, allowing easy access to hard-to-reach measurement points.

Photo of the Tektronix TIVP probes and an oscilloscope. Source: Tektronix

IsoVu Gen 2 probes offer bandwidths of 200 MHz, 500 MHz, and 1 GHz. With their shielded coaxial cable and isolation, they provide high bandwidth and a differential voltage range of ±2500 V. The new probes are more sensitive, with less noise at ±50 V for greater visibility and voltage sensitivity in wide-bandgap measurements. Additionally, the probes have improved DC accuracy, enhanced gain accuracy over the full input range, and better temperature drift correction. Common-mode rejection ratio at DC, 100 MHz, and 200 MHz is 160 dB, 100 dB, and 100 dB, respectively.
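Those CMRR figures are easier to appreciate as linear ratios. The conversion below is generic dB arithmetic applied to the numbers quoted above (the 1000 V common-mode swing is only an illustrative input, not a probe specification):

```python
def cmrr_residual(v_common, cmrr_db):
    """Equivalent differential error from a common-mode signal:
    Verr = Vcm / 10**(CMRR_dB / 20)."""
    return v_common / 10 ** (cmrr_db / 20)

# 160 dB at DC: a 1000 V common-mode swing contributes only 10 uV of error
print(cmrr_residual(1000, 160))   # 1e-05
# 100 dB at 200 MHz: the same swing contributes 10 mV of error
print(cmrr_residual(1000, 100))   # 0.01
```

In other words, every additional 20 dB of CMRR cuts the common-mode leakage into the differential measurement by another factor of ten.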

Prices for the IsoVu TIVP series of isolated oscilloscope probes start at $9000. Accessories include probe tips and adapters, soft carrying case, and bipod holder.

IsoVu TIVP series product page

Tektronix, www.tek.com

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.



Fri, 04 Dec 2020 18:03:19 +0000

Two buck converters from Texas Instruments enable designers to minimize power supply noise and ripple.

The post Converters tout ferrite-bead compensation appeared first on EDN.


Two buck converters from Texas Instruments, the TPS62912 and TPS62913, enable designers to minimize power supply noise and ripple. These DC/DC switching regulators with integrated ferrite-bead compensation offer low noise of 20 μVRMS for frequencies ranging from 100 Hz to 100 kHz and output-voltage ripple of just 10 μVRMS. According to TI, this allows engineers to eliminate one or more low-dropout regulators from their designs, reduce power losses by up to 76%, and save 36% of board space.

The TPS62912 buck converter. Source: Texas Instruments

By integrating compensation, the TPS62912 and TPS62913 use the ferrite bead already present in most power supply systems as an effective filter against high-frequency noise, reducing the power supply output voltage ripple by approximately 30 dB. Both the 2-A TPS62912 and 3-A TPS62913 provide a power supply rejection ratio of 65 dB at up to 100 kHz and have an output-voltage error of less than 1%.
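The attenuation and PSRR figures above convert to linear voltage ratios through the standard 20·log10 relationship; a quick sanity check (generic math, using only the numbers quoted in this brief):

```python
def db_to_voltage_ratio(db):
    """Convert a voltage-domain dB figure to a linear ratio: 10**(dB/20)."""
    return 10 ** (db / 20)

# ~30 dB of ripple attenuation is roughly a 31.6x reduction in ripple voltage
print(round(db_to_voltage_ratio(30), 1))   # 31.6
# 65 dB of PSRR means supply disturbances are attenuated about 1778x
print(round(db_to_voltage_ratio(65)))      # 1778
```

That 30 dB reduction is what lets the converters reach the microvolt-level ripple figures quoted above without an additional low-dropout post-regulator.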

Pre-production quantities of the TPS62912 and TPS62913 are available now, only on TI.com, in 2×2-mm, 10-pin QFN packages. Prices for the TPS62912 and TPS62913 start at $1.06 and $1.16, respectively, in lots of 1000 units. Evaluation modules are also available for $49 each. TI expects both devices to be available in volume production in the first quarter of 2021.

TPS62912 product page

TPS62913 product page

Texas Instruments, www.ti.com


