
Using probes with a Spectrum Analyser

Posted on: June 17th, 2021 by Doug Lovell

Using a Passive Oscilloscope Probe with a Spectrum Analyser

Solution: Spectrum analysers are typically used to measure radio frequency (RF) signals. The signals are usually delivered to the RF input of the analyser with an antenna, a magnetic probe, or a cable with matched impedance. This minimises impedance mismatch, which lowers reflected power and provides the cleanest measurement. However, this connection scheme is not always acceptable, especially for circuits that are highly susceptible to loading when attached to low-impedance inputs like those on most spectrum analysers.
This application note covers using a passive probe, typically used with an oscilloscope, with a spectrum analyser. We highlight some of the advantages and trade-offs with this technique as well.
Most analysers feature a 50 Ohm input impedance. In fact, many oscilloscopes with analogue bandwidths above a few hundred MHz also feature a 50 Ohm impedance setting. This lower impedance enables better performance at higher frequencies but can significantly load a circuit with higher impedance.
In this application note, we will use an RF signal source to deliver a -10dBm signal at 1GHz (CW Sine Wave) to a spectrum analyser, using a passive 1.5GHz oscilloscope probe.

Here is a screen capture of the signal directly connected to the input of the spectrum analyser using coaxial cable and BNC adapters:

Note that the marker above shows the peak at 1GHz with an amplitude of -10dBm. Now, we connect a 1.5GHz Passive Probe (Rigol RP6150 Passive probe) to the input of the spectrum analyser. The RP6150 is designed to be a 10:1 probe when connected to 50 ohms.
Using a probe with an impedance greater than 50 ohms acts as a voltage divider for signals being delivered to the spectrum analyser. This decreases the voltage to the input and effectively acts as an attenuator. It also has the advantage of lessening the circuit loading that can be caused by connecting the 50 ohm spectrum analyser input directly to the circuit.

Here is the same signal but instead of a direct connection to the RF input, we are using an RP6150 probe to detect the signal.

Note that the marker now shows -30dBm for the amplitude. This is due to the probe attenuation factor.
Let’s take a closer look at that probe. Recall that power is proportional to the square of the voltage amplitude. Therefore, you can calculate the probe power ratio by simply squaring the probe attenuation factor: a 10:1 voltage divider attenuates power by a factor of 100, which is 20 dB.

Some common probe attenuation ratios can be found using Table 1.

Table 1: Probe Attenuation Ratio to dB*

  Probe Ratio   Voltage Ratio   Power Ratio   Attenuation
  1:1           1               1             0 dB
  10:1          10              100           20 dB
  100:1         100             10,000        40 dB

*With 50 Ohm Input to Spectrum Analyser

Now, we can easily calculate the expected measured power using the equation below:

Measured Power (dBm) = Signal Source Power (dBm) – Probe Attenuation (dB)

So, if our signal source power is -10 dBm and the probe attenuation for our RP6150 passive probe is 20 dB, we would expect to read -30 dBm on the spectrum analyser, as we see in the above screen capture.
For convenience, we can then use the spectrum analyser's internal reference setting to adjust for the attenuation of the probe.
Simply press AMPT and set the reference level offset to the probe attenuation in dB. This scalar factor removes the additional attenuation from the displayed value and gives the corrected power value.
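The arithmetic above can be sketched in a few lines of Python (the helper names are ours, not part of any instrument software):

```python
import math

def probe_attenuation_db(voltage_ratio):
    """Voltage attenuation ratio (e.g. 10 for a 10:1 probe) -> dB."""
    return 20 * math.log10(voltage_ratio)

def expected_reading_dbm(source_dbm, voltage_ratio):
    """Displayed power is the source power minus the probe attenuation."""
    return source_dbm - probe_attenuation_db(voltage_ratio)

print(probe_attenuation_db(10))       # 20.0 (dB, for the RP6150 at 10:1)
print(expected_reading_dbm(-10, 10))  # -30.0 (dBm, matching the capture)
```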

Products Mentioned In This Article:

  • RP6150 please see HERE

Active loads and the DP800, DP1000 series of Power Supplies

Posted on: June 17th, 2021 by Doug Lovell

Active loads and the RIGOL DP800 and DP1000 Series

1. Introduction

The RIGOL DP800 series and DP1000 series are programmable linear DC power supplies. They are designed to provide power only to a passive load, one that cannot source current itself.
Active loads, meaning loads that can provide power (batteries, solar cells, etc.), should not be used with a DP800 series or DP1000 series power supply. Active loads can lead to instability in the power supply control loop and may damage the powered device.
Connecting the power supply to active loads is not recommended.

Figure 1 Improper use of DP800 and DP1000 Series Power Supply

2. Detailed Technical Description
The RIGOL DP800 series and DP1000 series programmable linear DC power supplies can only work in the first quadrant (sourcing a positive voltage and a positive current) or the third quadrant (sourcing a negative voltage and a negative current).
They cannot work in the second quadrant (negative voltage, positive current: acting as an adjustable load absorbing power) or the fourth quadrant (positive voltage, negative current: as used in a battery discharge test).
When the load itself is a source and the power supply is required to work in the second quadrant as an adjustable load, the control loop may lose control and the power supply will output an uncontrolled voltage. This could damage or destroy the load.

When the power supply works in the fourth quadrant (e.g., used in a battery discharge test), the control loop is also unstable and will quickly drain the battery. This can result in dangerous conditions, including damage to the battery, power supply, and a very high risk of fire and explosion.

2.1. Power Quadrants in More Detail
The Cartesian coordinate system is a common representation of power supply output capabilities. The horizontal axis represents voltage, and the vertical axis represents current. The distribution of the four quadrants of the power supply is shown in Figure 2.
The first quadrant: the power supply provides a positive voltage and a positive current (the direction of the current flows from the power supply to the load).
The second quadrant: the power supply provides a negative voltage and a positive current (the direction of the current flows from the power supply to the load).
The third quadrant: the power supply provides a negative voltage and negative current (the direction of the current flows from the load to the power supply).
The fourth quadrant: the power supply provides a positive voltage and a negative current (the direction of the current flows from the load to the power supply).

Figure 2 Distributions of Power Quadrants

2.2. Principle of the DP800 and DP1000
Here is a block diagram of the DP800 and DP1000 series:

Figure 3 Block Diagram of DP800 and DP1000

If a current is forced into the supply (i.e., the supply is made to sink current), it directly affects the operating state of the MOS pass transistor and results in instability within the control loop of the power supply, as shown in Figure 4.

Figure 4 Reverse Current (Backfeed) Diagram

In addition, the DP800 and DP1000 series power supply outputs do not have an output relay. When a specific output channel is disabled (off), the output voltage is set to 0 V and is still regulated by the control loop.

For charging batteries with the DP800 and DP1000 series power supplies, we recommend using constant current mode and implementing the circuit shown in Figure 5. The external series diode prevents current from flowing back into the supply and protects it from damage.
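A constant-current charge setup like this can also be driven over SCPI. The sketch below only builds the command strings; the command forms follow Rigol's DP800 programming guide, but treat the exact syntax (and the 4.2 V / 0.5 A values) as assumptions to check against your firmware:

```python
def cc_charge_commands(channel, volt_limit_v, charge_current_a):
    """Return a SCPI sequence for constant-current charging on a DP800.

    The voltage setting acts as the full-charge limit; the supply runs
    in CC mode while the battery draws the full programmed current.
    """
    return [
        f":INST CH{channel}",          # select the output channel
        f":VOLT {volt_limit_v}",       # charge-termination voltage limit
        f":CURR {charge_current_a}",   # constant charge current
        f":OUTP CH{channel},ON",       # enable the output (via the diode)
    ]

for cmd in cc_charge_commands(1, 4.2, 0.5):
    print(cmd)
```

Send these with your preferred VISA library once you have verified them against the programming guide for your model.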

Figure 5 Battery Charging Application Circuit

Products Mentioned In This Article:

  • DP800 Series please see HERE

Converting DP800 Record *ROF Files

Posted on: June 17th, 2021 by Doug Lovell

Reading Rigol DP800 Record (*.ROF) Files with Excel 

Solution: The Rigol DP800 series of power supplies have the option to data log the output voltage and current using the Record feature.

This application note covers how to convert the binary file format native to the record file type (*.ROF) to decimal using HxD (a freeware hex editor) and the ReadDPROF file, a worksheet created using Microsoft Excel 2010.
The end of this document describes the format of the data in the *ROF file and the Excel functions that were used to convert each data point to decimal.
Steps:
1) Configure the DP800 outputs and Devices (DUTs) for your experiment
2) Insert a USB stick (FAT32 format) into the USB slot on the back panel of the instrument

3) Enable the record feature by pressing the (…) button on the front panel

– Set the time per sample to record by pressing Period and use the keypad or wheel to increment the time

– Select the destination by pressing Det > Select Browser to highlight the external USB (D:) drive

– Press Browser to enter the D: > Press Save and input the file name

– Press OK when finished entering the filename

4) Enable the Recording by pressing SwitchOff. It will turn to SwitchOn when recording is active.

NOTE: The instrument is collecting data as soon as the Recording is enabled.
5) Enable the outputs or run the output profile using the Timer function

6) Once the test is completed, press (…), and disable the Recorder. As soon as it is disabled, the Record mode will ask if you wish to save the data. Press OK to save.

7) In this experiment, I had the following static output values for the duration of the test:
CH1V = 2.00V CH1A = 0.02A CH2V = 2.08V CH2A = 0.18A CH3V = 1.50V CH3A = 0.33A

8) Remove the USB stick and insert it into a computer. If you open the *.ROF file (res1.ROF is used in this example) you will see the binary values:

9) Open the ROF file using hex conversion software. In this example, I am using HxD, a freeware program from https://mh-nexus.de/en/hxd/
10) Here is the data in HxD

11) Configure HxD bytes-per-row to 4:
Before:

After:

12) Set Visible Columns to Text

13) Now the data should show the Offset and Hex Values

14) Click Export and select the RTF format

15) Now, open the ReadDPROF.xlsx workbook and select the RawDataFile Tab (at the bottom):

16) Select Data: Import Text, set file type to ALL, select the *.RTF file (this is the rich text conversion file from the HxD program)

17) Select Delimited and Next

18) Deselect Tab, select Space , and Finish

19) Select Cell A1 for import and press OK

20) Now, the formatted data will be transferred to the Excel Sheet

21) Click on the Calculations tab to see the reformatted data

The raw data format (*.ROF) contains the record period, the number of record steps, and the voltage and current of all channels.
The Calculations tab of the Excel sheet is designed for use with the three-channel DP800s and is only formatted for the first four data points. You can copy the final row of cells down to cover all of the data points for your application, as well as re-label the channels.

Each data point in the *.ROF file is 4 bytes long. To calculate the actual decimal value, the sheet:
– Reorders the bytes (AA BB CC DD to DD CC BB AA) using the Excel MID function
– Concatenates the bytes using the CONCATENATE Excel function
– Converts hex to decimal using the Excel HEX2DEC function
– Divides the decimal conversion by 10,000
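The same conversion can be done in a few lines of Python: the byte reordering the worksheet performs is simply little-endian decoding, followed by the 10,000 scale factor (the 2.00 V example value is taken from step 7 above):

```python
import struct

def decode_reading(raw4):
    """Decode one 4-byte *.ROF data point: little-endian uint32 / 10000."""
    return struct.unpack("<I", raw4)[0] / 10_000

# 2.00 V is stored as 20000 = 0x4E20, i.e. bytes 20 4E 00 00 in the file
print(decode_reading(b"\x20\x4e\x00\x00"))  # 2.0
```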

Products Mentioned In This Article:

  • DP800 Series please see HERE

Temperature Measurement App Note

Posted on: June 17th, 2021 by Doug Lovell

Every electrical and electronic device brought to market has to be classified and tested based on international guidelines, rules, critical values and safety conditions. Each instrument or device that passes is labelled with a special mark. In most European countries, this is declared by the CE mark. In the US, there is FCC approval as well as, sometimes, a UL listing. This identification was established as a standardised quality benchmark of design and manufacture. To get this certification, many tests have to be completed before a product can even be sold. Among the most common are EMC tests, including interference testing and electromagnetic compatibility. Additionally, there are commonly power and environmental test requirements. Depending on the device itself, there are additional mechanical tests related to vibration or climate. HALT testing (Highly Accelerated Life Test) combines vibration and temperature profile tests.
HALT is applied to the electrical and electronic components during their design phase. The devices will be stressed to achieve an accelerated aging. This is done to find possible problems that can occur during the lifetime of the product. The device will be stressed beyond the specified maximum specs (electrical and mechanical) in order to predict failure modes and future issues. Problems found based on such stress gives designers an opportunity to make improvements that can impact the long-term quality and vitality of a product before it even comes to market.
With this testing it is possible to detect potential quality issues during the early design phase. This saves time and money, because the later in the development phase problems are found, the more expensive and time-consuming the corresponding design changes become. HALT testing therefore helps reduce design time, and in turn time to market and the cost of the final product, making it a very important test during the design phase.
To conduct a HALT test, the device is positioned on a vibration table inside a climate chamber. The test normally consists of cycling through different steps, for example: cool down then heat up in a temperature change test, a vibration test, or combined stress tests. During all these test steps, the device has to be controlled and monitored, with measurements stored permanently for the duration of the complete test. Done correctly, a HALT test cycle can be an automated series of evaluations.

Temperature Measurements
Let’s look at the temperature measurement, which is normally done at different locations on or around the device under test (DUT) to get a complete picture of the device’s temperature distribution during the HALT test. To measure such a temperature profile within a climate chamber, you need a robust and accurate array of sensors. In this case a thermocouple is the best solution. A thermocouple is simple and robust with a broad temperature range; thermocouples are also cheap and easy to handle. Their disadvantage is that you need a reference junction to achieve good accuracy. The compensation can be done automatically by an instrument like the MC3065 DMM module, an available module for the M300 test system.

Most of the M300 switching cards include a CJC (Cold Junction Compensation) so you can measure the absolute temperature of each thermocouple. This can improve accuracy without an external reference such as ice water for comparison. There are 3 ways to provide a reference source for temperature measurements on the M300 system.

First, select a fixed reference for the highest measurement precision. This is traditionally done by putting the thermocouples in an ice bath to keep the junctions where the thermocouples meet the copper from the measurement system at a constant and known temperature. The ice bath maintains a very precise temperature as it melts to keep the liquid exactly at the melting point. Because this method can be difficult to maintain there are 2 other ways to approximate compensation for the additional junctions.

The most convenient method is to use an internal CJC. The M300 terminal blocks have an isothermal area that includes a temperature reference that can be measured by the DMM. Because the junctions between the measurement system and the thermocouple that we don’t want to measure are on the terminal block we can use the local temperature to predict offset voltage from these junctions and improve accuracy of our measurements. While not as accurate as an ice bath, this is a reliable method for correcting some measurement errors.

Lastly, the M300 system allows users to set one of the channels as an external reference. Implement this method with a more precise RTD or Thermistor that works well in your environmental range. This can provide a more accurate reading of cold junction temperatures than the internal CJC, but will be less accurate than an ice bath because there is more temperature differential across the terminal blocks that create offset.

The nonlinearity of thermocouples is also not a big problem, because the built-in multimeter handles the conversion from voltage to temperature for all the different thermocouple types (K, J, E, etc.). Each type differs in accuracy and useful range.

Thermocouples utilise the Seebeck effect: two different metals connected together generate a voltage related to the temperature differential between the pair of junctions. This voltage is very low; a Type K thermocouple, for example, changes by roughly 40 µV for every 1 °C. Therefore you need a voltmeter with high resolution and accuracy, like the 6½-digit DMM in the M300 system.
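As a rough illustration of the arithmetic (not what the instrument actually does, which uses full polynomial lookup tables per thermocouple type), a first-order conversion using the ~40 µV/°C Type K sensitivity quoted above looks like this:

```python
TYPE_K_SENSITIVITY_V_PER_C = 40e-6  # approximate, linearised

def thermocouple_temp_c(v_measured, t_cold_junction_c):
    """Hot-junction temperature from measured voltage plus the CJC reading."""
    return t_cold_junction_c + v_measured / TYPE_K_SENSITIVITY_V_PER_C

# 4 mV measured with the terminal block (cold junction) at 25 degC
print(thermocouple_temp_c(0.004, 25.0))  # about 125 degC
```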

For the complete test it is also necessary to measure more than just temperature. Other parameters such as voltage, current, and resistance can be relevant. It is also very useful to have math functions included in the system, because you can then define different types of sensors to measure parameters such as pressure, movement, and more.

Rigol’s M300 Data Acquisition System
Rigol’s M300 is a complete test solution with built-in capability to measure a wide range of physical and electrical parameters. Up to 256 points per mainframe can be measured for temperature, voltage, current, or resistance with a common low connection, or 128 true 2-wire differential signals.
The differential modules can also be configured for automatic 4 wire resistance measurements.
Besides measurement, the M300 can also operate as a control unit. With modules containing analogue and digital outputs as well as external source switching, it can react programmatically to process limits in order to shut down, reconfigure, or alert the engineer of a change during the automated test.
• Integrated CJC (cold junction reference) for reference temperature of thermocouples
• Built-in tables for voltage-to-temperature conversion
• High resolution voltage measurements down to 100 nV
• Digital IO cards with up to 64 lines
This is just a small selection of the built-in functionality of the M300 system.
There are many industrial application areas where it is absolutely necessary to make the measurements described above as efficiently as possible, including:
Consumer Electronics – washing machines, dishwashers, and stovetops, but also everything from phones and computers to toasters and doorbells.
Automotive – Validation in temperature chambers of electronic parts.

Aerospace, Transportation, Telecomm – climate/temperature overview for components, critical systems, engines, Base-Stations, etc.
Power Plants – temperature profile of cooling towers, transformers, relays, and fuses

To make configuration and measurement more effective and easy to use, the M300 comes with PC-based data logging software called UltraAcquire.
UltraAcquire Pro is the extended version of the standard software tool and allows use of more than one mainframe in a single test configuration, enabling extended graphing capability, data storage, and more.

An example of a typical configuration that includes voltage, resistance, frequency, and temperature measurements with different sensor types (PT100, Thermocouple Type K, Type E, Type N) can be seen in Figure 1. With the software you can define how each sample is measured, how fast the scan should run, how quick or how accurate each measurement will be, and how the data should be stored.

Scans can also be easily configured from the front panel as shown in Figure 2. Monitor up to 4 channels in real time on the front panel during continuous scans (as shown in Figure 3) without connecting the instrument to any software or computer. This is great for industrial or hard to reach signals where local computer control is unavailable or undesirable.

Figure 1: Configuration example using UltraAcquire software

Figure 2: Switch configuration on the M300 display

Figure 3: Channel monitoring on the M300 display

Products Mentioned In This Article:

  • MC3065 please see HERE
  • M300 Series please see HERE

M300 System Measurement Primer

Posted on: June 17th, 2021 by Doug Lovell

Introduction to data acquisition with Rigol’s M300

The M300 Data Acquisition System is designed for automated multichannel measurement and switch applications. This includes environmental measurements like temperature for burn in tests as well as voltage, frequency, or resistance measurements from a wide variety of sensors used from research to production test. These applications are actually very common across a number of industries from electronics to research including a large component of environmental applications.

Rigol’s M300 system will cover many of these applications at a considerable price/performance advantage versus traditional solutions. Keys to successful system implementation for your application revolve around proper upfront design and planning. These questions include:
What modules and accessories do I need?

How fast will the system go?

What accuracy should I expect on my measurements?
Rigol’s solution provides a production quality alternative at an impressive combination of performance and value. Our large format display and advanced software capabilities allow users from technician level to test engineer to get the most out of the system with ease.
Key Applications
CJC Compensated Temperature
Temperature measurement is one of the most common configurations for M300 systems. Large channel count temperature measurements are used from research to industrial settings, wherever environmental chambers are used or any kind of lifetime/burn-in testing is done. Using 64-channel MUX modules, a single M300 mainframe can scan up to 256 channels (four 64-channel modules and one DMM module in a system). For temperature measurements, make sure to get the terminal block accessories, since they contain the CJC reference that improves accuracy.

Voltage and 2-Wire Resistance
Voltage and 2-Wire Resistance will usually follow the same configuration as a basic temperature scan. The 64 channel cards only switch the high signals. Some systems may require switched LOs if the signals cannot be all ground or common referenced. For these cases, select the 32 channel MUX module (MC3132) or the 20 channel MC3120.

4 Wire Resistance Measurements
4-wire resistance measurements offer higher accuracy and precision. They are more common in scientific and research communities, but can also be used for very sensitive temperature measurements with RTDs. Use either the MC3120, MC3132, or MC3324. The MC3324 adds 4 current measurement channels, but the MC3120 or MC3132 is more cost effective if only 4-wire measurements are required.

Current Measurements
Large systems of current measurement are not very common, but current measurements are an important part of many test systems. The MC3324 has 4 current channels mixed with 20 voltage channels in a normal configuration. Current measurements are shunted when not connected to the DMM so that the source does not see an open circuit.

Matrix Applications
Matrix applications are all about signal routing; no measurements are made. The signal in one column can be routed to any row with one switch. It is also possible to connect one column to multiple rows, or even to another column if needed, using multiple switches. Configuration of these systems is usually tougher because of their general size and the complexity of connecting between cards using the backplane or external jumpering. Accuracy is the key for these systems. Often, the customer will be interested in low voltage offset, high voltage, high current, or even bandwidth. Refer to the module specifications for these – remember the matrix card does not connect to the DMM, so the DMM specifications are not relevant.

Mixed and Multifunction Applications
These applications require a mix of the capabilities above or some of the more specialised DAC, Totaliser, DIO, or actuator functions. Most applications will not be that varied, but some will be largely multiplexer systems with a few DACs or current measurements mixed in. Simply make sure you have the right number of channels for each capability and consult with applications support to be sure.

Actuator Systems
These applications usually deliver higher voltage, current, or power to external systems, so they use the MC3416 actuator module instead of more standard modules. Each card has 16 SPDT channels that are not connected to the DMM. Each channel connects the COM to either the NO or NC position; one of the two is always connected to the COM.

Rigol’s M300 provides all the necessary tools for configuring your automated acquisition system for performance and accuracy at an unprecedented value.

Products Mentioned In This Article:

  • M300 Series please see HERE
  • All channel cards are listed as optional accessories on M300 models.

SiFi Technology in Arb Wave Creation

Posted on: June 17th, 2021 by Doug Lovell

Introduction to Waveform Generator Technology

Traditional function and arbitrary waveform generators have for many years been built on one common technology – DDS or Direct Digital Synthesis. DDS allows an instrument to create waveforms by tracking the phase of a reference clock and outputting the closest sample to the desired signal at each output sample time. DDS has enabled quality performance at a reasonable price for generations of function generators.
Today, new technologies are emerging that retain the advantages of DDS while improving signal fidelity and usability in more applications than ever before. Rigol’s SiFi technology was created for its latest arbitrary waveform generator family, the DG1000Z series. These instruments combine true point-to-point generation of arbitrary signals with redesigned output hardware to create arbitrary waveforms with flexibility and accuracy not available a few years ago. Combined with the available deep memory, SiFi technology enables emulation of precise arbitrary signals over longer periods without losing fidelity.

Understanding DDS or Direct Digital Synthesis
The DDS method uses phase to determine the correct output over time. Let’s look at an example. Assume we have an 8192-point arb that we want to play back at 6.25 kHz, loaded with an arbitrary waveform made up of 400 cycles of a sine wave. We should therefore have a fundamental frequency of 400 × 6.25 kHz = 2.5 MHz. The DDS generator assigns a phase value to each point in the wave. The first point is 0 degrees, and each subsequent point adds an increment of 360°/8192, so that all the points are played in one period and the first point comes up again when the phase returns to 0 degrees. That increment is approximately 0.044 degrees. Driven by the clock source (often a PLL), the instrument essentially measures its phase from the start every 5 ns (the instrument has a 200 MSa/s update rate, or once every 5 ns) and chooses the closest phase value to select from the arb table. In this example, each 5 ns represents 360 degrees / (160 µs / 5 ns) = 0.01125 degrees. The arbitrary waveform as it appears in the UltraStation software is shown in Figure 1, and the actual output values selected at the 2.5 MHz fundamental frequency are shown in Figure 2.
What is worth noting about the output is that even though we can output samples much faster than required, we have created some distortion. The points in the arb are all evenly spaced in the file, yet some are held at the output for 15 ns (three clock ticks) and some for 20 ns (four ticks), because the ideal 19.5 ns per point does not divide evenly into the 5 ns sample grid. The lack of smooth, continuous changes created by the file’s quantisation of the sine wave causes this distortion. The distortion increases significantly when the playback period is adjusted slightly, because the DDS algorithm is forced to make tougher decisions about which point to output: the ideal output is now further from the available points, which were chosen for the initial playback period. This matters because the careful sampling needed to generate a correct, high fidelity arbitrary signal is the time consuming and difficult task. Using DDS, engineers who want high fidelity signals must go back and resample, recreate, and reload an arbitrary waveform whenever they want to tweak the playback period. DDS forces engineers to choose between convenient and efficient signal generation or high fidelity and accuracy during playback.
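The uneven hold times can be reproduced with a toy phase-accumulator model (a deliberate simplification of real DDS hardware, using the numbers from the example above):

```python
# 8192-point arb played at 6.25 kHz from a 200 MSa/s clock: the phase
# accumulator advances 8192 * 6.25e3 / 200e6 = 0.256 table points per tick.
N_POINTS = 8192
PHASE_INC = N_POINTS * 6.25e3 / 200e6

# Table index chosen at each 5 ns output sample over one arb period
indices = [int(k * PHASE_INC) for k in range(32000)]

# Measure how many consecutive ticks each table point is held
runs, count = [], 1
for prev, cur in zip(indices, indices[1:]):
    if cur == prev:
        count += 1
    else:
        runs.append(count)
        count = 1

# Evenly spaced table points end up held for different numbers of ticks
# on the 5 ns grid -- the distortion described above.
print(sorted(set(runs)))  # [3, 4]
```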

Figure 1: 400 cycles of Sine wave in an arbitrary waveform shown in Rigol UltraStation Software

Figure 2: Arbitrary wave data table showing DDS algorithm for playback

SiFi technology overcomes this basic effect on signal integrity with a new architectural approach. Let’s take the same signal and example and test it in SiFi mode. Here we load the same arbitrary wave and simply set the output sample rate to 8192 points × 6.25 kHz = 51.2 MSa/s. Now, after changing that one setting, we investigate the output of the signal with a spectrum analyser. The data is overlaid with the DDS mode data in Figure 3. To create this spectrum we used Max Hold on each trace while we changed the playback frequency for DDS and the output sample rate for SiFi to create fundamental frequencies between 1 and 2.5 MHz. As we adjust the playback parameters in real time, DDS mode creates signal distortion at various frequencies across the 2-10 MHz band, shown in yellow. Using the same exact arbitrary waveform, a simple switch to SiFi mode creates much more even waveforms with significantly higher signal fidelity, shown in purple.
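In point-to-point mode the required sample rate follows directly from the waveform length and the desired repetition rate:

```python
def sifi_sample_rate(n_points, repetition_hz):
    """Output sample rate for true point-to-point arb playback."""
    return n_points * repetition_hz

# The example above: an 8192-point arb repeated at 6.25 kHz
print(sifi_sample_rate(8192, 6.25e3))  # 51200000.0 Sa/s, i.e. 51.2 MSa/s
```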
This is a simple example of the difference between the two architectures, but even advanced users may be unaware of the trade-offs they are making with a traditional signal generator. Most users would assume that a 30 or 60 MHz arbitrary generator is capable of a nearly perfect 1 MHz sine wave; it all depends on the importance of signal fidelity to the application at hand. Many engineers look at output sample rate as a key specification, but it doesn’t tell the whole story. In the example above, the DDS wave was output at 200 MSa/s while the SiFi wave was output at about 50 MSa/s, yet the SiFi wave produced a much cleaner signal. The more complex the arbitrary waveform, the more difficult it becomes to understand the impact of the sampling technology. Artefacts from this resampling can have a profound impact on the frequency content of a true arbitrary wave, and there is no way to easily separate the real wave from the sampling artefacts. This also means that buying a DDS waveform generator with a higher output sample rate invariably alters the frequency components of the signal, even when playing the same arbitrary file. With SiFi technology this is not the case.
Signal fidelity is critical to design engineers using waveform generators in their testing. Using a generator with SiFi technology improves the accuracy of waveforms you reproduce by allowing the engineer maximum flexibility in setting the output rate of their arbitrary waveform.

Figure 3: Comparison of 1-2.5 MHz Sinusoidal arbitrary waves. Yellow is DDS generated. Purple is SiFi technology.

Enabling more functions and waveform types
Improved signal fidelity is great, but signal quality alone doesn’t make a great technology or a great instrument. Alongside Rigol’s SiFi technology is the capability to create more unique waveform types without having to build custom arbitrary waves. This includes the unique ability to build harmonic waves on the instrument front panel where the engineer describes the phase and amplitude of each harmonic element of the starting frequency. Figure 4 shows how an engineer can define a harmonic wave from the instrument’s front panel. Harmonic waves let the engineer set amplitude and phase values for the fundamental frequency up through the 8th harmonic. Traditionally, engineers who need signals which are more easily defined in RF space would have to define each frequency, amplitude, and phase and sum them together into an arbitrary wave. To create the wave in RF space the user would then have to resample the output in time domain with the correct sample spacing. This is a cumbersome way to generate and work with arbitrary waves. Harmonic waves are much easier to create. Simply define the power and phase at each frequency at a multiple of the fundamental and the instrument automatically combines them and plays them back. Figure 5 shows the matching spectrum to the signal defined in figure 4. Figure 6 is the same wave captured on a scope. This is the time domain arbitrary data a user would have to create, load, and configure on a traditional generator to get the same signal they can now quickly build from the front panel. With these new capabilities empowered by SiFi technology, the Rigol DG1000Z series waveform generators add significant power and flexibility to the engineer’s bench.

Figure 5: Harmonic Wave Spectrum Analyzer measurement

Figure 6: Harmonic Wave Oscilloscope measurement

Developing Powerful and Flexible Deep Memory Arbitrary Waveforms
The key technological advance of SiFi is the ability to deliver true point-to-point arbitrary waves. Without this capability, arbitrary waves are notoriously difficult to generate accurately and require additional behind-the-scenes work by engineers, who must slightly adjust sampling and points to improve the overall signal fidelity. This task becomes considerably more difficult when using deep memory arbs that contain millions of points. With SiFi technology, engineers can create longer, more precise arbitrary waveforms. In the adjustable sample rate mode, users can define a signal that will be output at up to 60 MSa/sec. With up to 16 million points of memory depth, it is then possible to create completely custom point-to-point waveforms up to 250 milliseconds in length while still maintaining the full output sample rate. The traditional difficulty with such long waveforms is that they are a challenge to edit. For instance, Microsoft Excel 2013 allows only just over 1 million rows of data. On a DDS generator, making a slight change to the playback period means either resampling the wave or dealing with artefacts created by the DDS phase-based sample selection. With SiFi technology, you can leave the precise waveform as sampled and simply adjust the output sample rate. This saves the considerable time and effort of editing and reloading long waveforms to the instrument.
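The arithmetic behind that flexibility is simple; as a sketch (the 8,192-point wave and 1 ms period are hypothetical example values), changing the playback period becomes a sample-rate calculation rather than a waveform edit:

```python
def playback_duration(n_points, sample_rate_hz):
    """Seconds needed to play the waveform memory through once."""
    return n_points / sample_rate_hz

def sample_rate_for_period(n_points, desired_period_s):
    """SiFi-style adjustment: keep the sampled points untouched and
    change only the output sample rate to hit a new playback period."""
    return n_points / desired_period_s

# Stretch an 8,192-point wave (hypothetical example) to a 1 ms period
rate = sample_rate_for_period(8192, 1e-3)   # 8.192 MSa/s
one_pass = playback_duration(8192, rate)    # back to 1 ms
```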
While SiFi makes arbitrary waves easier to manipulate and more flexible once they are created, users still need a reliable method of generating, editing, and loading long waveforms to their instrument the first time. SiFi enabled generators come with free UltraStation software for waveform editing. This tool enables importing, combining, and freehand editing of deep memory waves. Waveforms can then be loaded directly to the instrument over LXI or USB. In addition to the time domain, the editing software has a spectrum view to see the power and phase of the signal you created as shown in Figure 7. The combination of deep memory, SiFi technology, and enabling editing software empowers engineers to reproduce more flexible, more precise waveforms than traditional DDS technology alone.

Figure 7: Arbitrary waveform spectrum view in UltraStation software

Unprecedented Value
Rigol’s SiFi technology and the DG1000Z series waveform generators allow engineers to cover more signal reproduction applications than ever before with improved signal fidelity, flexibility, and ease of use. The deep memory capabilities and hardware design of the instruments work together with SiFi sampling technology to make these improvements possible and deliver unprecedented value to the engineer’s bench.

Products Mentioned In This Article:

  • DG1000Z Series please see HERE

SiFi 2 Technology and 16 Bit Resolution App Note

Posted on: June 17th, 2021 by Doug Lovell

Unique waveform reproduction technology

RIGOL’s newest arbitrary waveform generators, the DG800 and DG900 Series, combine high resolution output and advanced filtering techniques in point-to-point waveform generation, expanding the value of arbitrary waveform generators in test platforms. The DG800 and DG900 (shown in Figure 1) each have our new 16 bit output capability, which improves the accuracy of each step in a waveform. Customisable filtering gives you more options to define how these points go together: without recreating the waveform points, you can change the bandwidth of a signal by precisely defining the high frequency edges. This is the essence of SiFi II technology, and it makes RIGOL’s new generators some of the most customisable generators available today.

Figure 1: Rigol DG900 Series Waveform Generator

High Resolution with 16 bits
Output resolution is an important characteristic of any signal generator. On modern digital instruments, resolution is measured in bits. Let’s look at a common example. Assume your signal generator has a 10 Vpp output. If the voltage level within that 10 volt swing is represented by a 14 bit number, there are 2^14 or 16,384 possible output values. These are distributed evenly, so dividing 10 volts by 16,384 means each level is about 610 microvolts from its neighbour. If the desired voltage level at a point in time falls between two of these levels, the closest level is selected. This creates a small error in relation to the desired signal. Over time this error can make a substantial difference to the signal fidelity and ultimately to the response of the device under test.
With a 16 bit generator the same 10 volt swing is split into 2^16 or 65,536 sections. Since each of the two additional bits has 2 states (0 or 1), there are 4 times as many states in a 16 bit number. With 4 times as many voltage settings available, the error introduced is significantly reduced. But that isn’t the only effect. Since there are more voltage options, the instrument can adjust the output more often, taking fuller advantage of its output rate.
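The effect of the extra two bits can be sketched numerically (a simplified model that captures only quantisation, not the noise present in a real measurement):

```python
import numpy as np

def quantize(signal, vpp, bits):
    """Round each sample to the nearest of 2**bits evenly spaced levels
    spanning a vpp peak-to-peak output range."""
    step = vpp / 2**bits
    return np.round(signal / step) * step

# A slow +/-10 mV ramp inside a 10 Vpp output range (hypothetical values)
ramp = np.linspace(-0.01, 0.01, 1000)
err14 = np.sqrt(np.mean((quantize(ramp, 10.0, 14) - ramp) ** 2))
err16 = np.sqrt(np.mean((quantize(ramp, 10.0, 16) - ramp) ** 2))
# With quantisation alone the RMS error ratio is close to 4; the measured
# improvement reported later in the article is a more conservative factor > 2.
```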
Let’s take a look at how these two signals compare with an oscilloscope. Figure 2 shows the averaged waveforms from the 14 and 16 bit output. Figure 3 shows an arbitrary wave that we have created for this test. It is a standard 8192 point arbitrary wave that includes a slow ramp in voltage followed by a pulse to -5 V and then to 5 V.

The purpose of the pulse activity is to make certain the generators we test are setting their output range correctly for 10 Vpp. We loaded this same arbitrary wave into two instruments. One is a 14 bit generator and one is the new DG900 Series 16 bit generator. We synchronised the outputs and set each instrument to output 1 MSample per second to make the minimum step time 1 microsecond. We then captured the outputs of both generators on the oscilloscope using heavy averaging. This reduced the noise and enabled us to view the output steps of each instrument. The oscilloscope screen is the comparison shown in Figure 2.
The purple trace is the 14 bit generator. As you can see the output changes every 6 microseconds. Even though the output is set to change every microsecond the slow ramp in the arbitrary wave only moves enough in voltage every 6 steps to trigger an actual level change in the output.
The 16 bit channel in yellow is using the same arbitrary wave file. Because the output has a smaller voltage step size it makes smaller steps and also makes them more frequently since the voltage change requested in the waveform file is 4 times as likely to trigger a change. Under these test conditions the 16 bit generator updates the output about 4 times as often and each step has 4 times the resolution.
To analyse this data further we can request it from the scope and chart it in Excel (shown in Figure 4). Here we show the data extracted from the oscilloscope for the two channels as well as the ideal ramp line we were trying to emulate, all overlaid. As shown in the inset, the RMS error of each waveform is calculated in comparison to the ideal line. The 16 bit generator reduces the RMS error of the signal by a factor greater than 2, meaning that it contains less than half the RMS error.
In cases where accuracy and signal fidelity are important, the RIGOL 16 bit generators provide significantly more capability than traditional 14 bit generators.

Figure 2: 14 bit vs 16 bit Oscilloscope comparison

Figure 3: Sample Arbitrary Waveform

Figure 4: Comparison of emulated and ideal signals showing > 2x RMS error improvement with 16 bits

Flexibility in Filtering with SiFi II
While the 16 bit resolution improves signal fidelity, how an instrument moves between points in a waveform has a dramatic effect on both the time domain and RF domain view of a signal’s characteristics. Traditional generators employed DDS (Direct Digital Synthesis) technology, which selects the best output point at any time based on phase. RIGOL’s SiFi technology, which was introduced on the DG1000Z series generators, employs a true point-to-point output to decrease overall signal noise versus DDS.

The DG800 and DG900 Series generators are the first to employ SiFi II technology. These instruments utilise the point-to-point accuracy of SiFi and add filter customisation to the movement between points. This customisable setting provides flexibility for dynamic signal generation. Within the sequence menu users can select between interpolation, step, and smooth filtering. These filtering techniques change the look of the waveform in the time and RF domains in ways that aren’t easy to duplicate without starting over with a new waveform on any other generator. Let’s look at a simple 1 kHz square wave. The standard square wave function in sequence mode utilises an 8,192 point square wave. In the sequence menu we set 8.192 MSamples per second so that the wave repeats every 1 millisecond. Now, we can use the same wave amplitude and points in a point-by-point mode, but alter the output by adjusting the filtering. Figure 5 shows how the different filtering options appear on a spectrum analyser. Using a max hold trace we can see how much wideband noise is generated by each method.
Even though the primary signal is only at 1 kHz, the square wave generates harmonics visible out into the MHz range. By changing the filtering mode, engineers can create a sharper drop-off indicative of a filtered or bandwidth-limited signal path, or select a wide bandwidth footprint. This capability is very useful for looking at how signal conditioning and system design might affect the interpretation of the generated signal. Figure 6 shows the same 3 signals on an oscilloscope. The step filter creates a near ideal step response with limited overshoot; consequently, there are fewer high frequency components present. The smoothing filter smooths the transitions but allows for some overshoot, creating a different time domain look and a moderate amount of high frequency components. Interpolation mode creates a linear step. This step function has hard edge transitions that add significant high frequency components. The edge time in interpolation mode can be adjusted for further optimisation. We can see this in Figure 7. Here we used infinite persistence on the oscilloscope to show that the edge time can be set from 8 ns to about 90 ns in this configuration. This gives system engineers a tool for fine tuning signal response to verify their design parameters. With all of these filtering options the generated signal can be optimised to closely match whatever signal characteristics are required.

Figure 5: Comparison of SiFi II filter modes in the frequency domain

Figure 6: Comparison of SiFi II filter modes in the time domain

Figure 7: Range of the edge time in interpolation filter mode using SiFi II technology

Unprecedented Value
RIGOL’s SiFi II technology and the DG800 and DG900 series waveform generators allow engineers to more closely reproduce signals of interest with a combination of 16 bit resolution and point-to-point filtering options. Signals with complex RF footprints or high fidelity requirements can now be emulated more precisely with RIGOL’s SiFi II technology in the DG800 and DG900 series arbitrary waveform generators.

Products Mentioned In This Article:

  • DG800 Series please see HERE
  • DG900 Series please see HERE
  • DG1000Z Series please see HERE

Advanced Embedded Debug with Jitter and Real-Time Eye Analysis

Posted on: June 17th, 2021 by Doug Lovell

Introduction

Debugging embedded communications is one of the most common tasks for electronic design engineers. Efficient analysis of serial communications requires more than simple triggering and decoding, but historically there has been a significant cost difference between oscilloscopes with mixed signal, serial triggering, and serial decode capabilities and the high performance instruments with advanced analysis. Engineers need the ability to test long term signal quality characteristics, including jitter and eye patterns, without investing in high performance, high cost solutions.
The MSO8000 (Figure 1) provides the most complete analysis capabilities, deepest memory, and highest sample rate in its class. Built for embedded design and debug, the MSO8000 is designed to enable engineers to speed verification and debug of serial communications on a budget. Let’s look at how class leading sampling, memory, and analysis can be used to debug complex signals quickly and easily with the help of jitter and eye analysis.

Figure 1: The MSO8000 High Performance Oscilloscope

Characterizing Jitter
Clock precision is critical to high performance digital data transmission. Subtle changes in clock frequency affect error rates and data throughput, but these timing errors can’t be visualized easily in a traditional oscilloscope view. Jitter analysis can be done on these types of signals. Utilizing the high sample rate and the deep memory, the oscilloscope compares the changes in time between thousands of clock transitions. This makes it possible to visualize timing fluctuations below 100 picoseconds while also tracking changes in clock timing over long time periods.

One of the keys to visualizing jitter is the TIE, or Time Interval Error. The Time Interval Error is the difference in time between the expected and the actual clock edge. There are two main visualization tools we use with TIE for debugging. First is the TIE trend graph. This shows the accumulated error in time of the TIE values. This trend is a valuable debug tool since it highlights periodic types of jitter. Figure 2 shows the high speed clock signal on channel 1 (yellow) and the TIE jitter trend in purple. The vertical scale for the TIE trend shown here is 10 nanoseconds per division. The trend shows that the TIE accumulates periodically, which implies a periodic signal or event is affecting the clock frequency. Next, investigate the TIE trend with measurements or cursors as shown in Figure 3. The cursors make it easy to view the period of the signal and calculate the frequency as well (1/DX). Direct measurements of the TIE trend can also be made. The period of these changes is an important clue as to the root cause of any jitter issues.
In addition to the TIE trend you can also calculate the distribution of the TIE values. The shape and standard deviation of the TIE values is an important component of determining root cause. For the signal above the histogram is shown in Figure 4.
When using the TIE trend and distribution to debug and solve jitter related issues it is important to understand the nature of the TIE values and trend. Remember that TIE is calculated as the accumulated changes in the period of the underlying signal. This means that the TIE graph looks like the integral of changes in the period. Therefore, the triangle wave shown in the figures so far represents a square wave change in the period. This is critical to understanding how to debug signal jitter. The TIE trend shows the period changes lengthening linearly (triangle rising) and then shortening linearly (triangle falling). When TIE is increasing linearly, the period must be longer than the expected period, but at a fixed value since the sum is linear. When TIE is decreasing linearly, the period is at a fixed value shorter than expected. Therefore, the period is changing between 2 fixed values at this frequency: one just above and one just below the expected period. We are therefore looking for a 10 kHz square wave that is somehow affecting our clock timing. The histogram tells us this fluctuation appears to be constant, as the TIE distribution is evenly and symmetrically spread across those values.
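The integral relationship above can be sketched in a few lines; the 50 ps deviation is a hypothetical value, while the 70 Mb/s rate and the 10 kHz modulation come from the text:

```python
import numpy as np

t_nom = 1 / 70e6            # nominal clock period (70 Mb/s, per the text)
dev = 50e-12                # +/-50 ps period deviation (hypothetical value)
k = np.arange(70_000)       # roughly 1 ms worth of clock edges

# 10 kHz square-wave modulation: the period sits at a fixed value just
# above nominal for half a cycle, then just below it for the other half
mod = np.where(np.sin(2 * np.pi * 10e3 * k * t_nom) >= 0, dev, -dev)

actual = np.cumsum(t_nom + mod)   # measured edge times
expected = (k + 1) * t_nom        # ideal edge times
tie = actual - expected           # accumulates into a triangle wave
```

Plotting `tie` against `k` reproduces the triangular trend seen in Figures 2 and 3: a fixed positive period error integrates into a rising ramp, a fixed negative error into a falling one.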

Figure 2: TIE trend graph showing periodic jitter

Figure 3: TIE trend graph cursor measurements

Figure 4: TIE trend and TIE Histogram

After testing different nearby signals on our device under test, we find a time correlated square wave (Figure 5) shown in blue at that frequency that is affecting our serial clock timing.
Jitter can often be caused by issues with PLLs, power fluctuations, or emissions, among others. Let’s look at some use cases where the histogram data is important to correct jitter analysis.
Figure 6 shows a bimodal distribution of the TIE values in the histogram with a sinusoidal TIE trend. Here we can see the standard deviation (sigma in the histogram statistics) is about 3.2 ns. Since these trends have a mean value close to zero, the standard deviation can also be approximated as the RMS value of the signal (shown in the RMS measurement in the bottom left). Since both sinusoidal waves and triangle waves have integrals that appear visually sinusoidal, it can be difficult to discern whether the underlying changes to the clock period are more triangular or sinusoidal. The standard deviation and the histogram are additional tools that can help to determine what signals might be interfering. Often, signal timing shows a sharper correction, or snapback, to the nominal timing as the clock is realigned. Visually this can look more like a ramp or saw wave. The histogram can help to visualize the asymmetry if the signal drifts slowly and then is corrected quickly. Figure 7 is a good example of an asymmetrical jitter distribution that still appears nearly sinusoidal in the jitter TIE trend. A distribution like this makes it easier to pinpoint the process that might be causing the fluctuations.
One important key to jitter measurements is to remember that this is about data integrity and ultimately about errors that cost the system time or bandwidth. In other words, it isn’t just about how much the timing might fluctuate but how your receiver views the data. For this reason it is important to test the signal for jitter in the same way that the receiver determines its clock. In serial communications the clock can be explicit, meaning that there is a clock line transmitted for this purpose. There may also be a constant clock speed defined by the communication standard. It is also common for the receiver to ‘recover’ the clock from the signal itself using a PLL circuit.

Figure 5: Finding root cause with Jitter analysis

Figure 6: Standard Deviation of the histogram

Figure 7: Asymmetry in the histogram results

The design of the receiver has an outsized effect on jitter and timing. If the receiver uses a constant clock at a 70 Mb/sec rate, the jitter appears as shown in the figures above. If the receiver uses a 1st order PLL with 200 kHz of bandwidth, it can eliminate much of the low frequency jitter we saw at 10 kHz. This is shown in Figure 8. The MSO8000 can emulate explicit, constant, 1st order PLL, and 2nd order PLL clock recovery systems to precisely measure the jitter or eye diagram as it will be seen by the receiver. These capabilities are important for accurately debugging critical timing issues while ignoring insignificant ones. Once we are correctly emulating the clock recovery and have removed key causes of jitter, we can zoom in on the TIE trend to 500 picoseconds per division (Figure 9). We still see some periodic fluctuations, but they have been reduced significantly and may no longer have any impact on the bit error rate. Once all the jitter has been adequately addressed in the system, you can view noise sources below 500 picoseconds per division as shown in Figure 10. The MSO8000 oscilloscope also enables a direct statistical table view of the TIE values as well as cycle-to-cycle values and values calculated from both the positive and negative widths. Together, these jitter tools make it possible to carefully visualize and analyse timing issues in vital serial communication links.

Figure 8: Dynamic Clock Recovery

Figure 9: Remaining Jitter

Figure 10: Jitter Measurements

Signal Quality & the Eye Diagram
Timing is only one of the characteristics that contribute to overall signal quality. The goal of all signal quality analysis is to reduce data error in the transceiver link. Errors are often caused by timing and clock issues, but problems stemming from bandwidth, grounding, noise, and impedance matching all can impact how a bit is interpreted by the receiver. The best method for visualizing the holistic data signal quality is the eye pattern or eye diagram test. Real-time eye diagrams are a great way to validate and debug serial data links where throughput and bit error rate are important to system performance.
The eye diagram analyses the data line by aligning the bit timing with the recovered clock. The same clock recovery options are available here as in the jitter toolkit. The eye diagram is then created by lining up and overlaying each bit. A density plot is then created from what can be thousands of bit sequences. This is called an eye pattern or eye diagram because the shape in the center resembles an open eye that closes to a point on each side. The goal is to have an open eye where the bit level (0 or 1) is correctly interpreted at the center of the eye.
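Conceptually, building the eye reduces to slicing the data waveform into one-bit segments aligned to the recovered clock and overlaying them; a minimal sketch with synthetic NRZ data (all values hypothetical):

```python
import numpy as np

def eye_segments(samples, samples_per_bit):
    """Slice a sampled data signal into one-bit-long rows, ready to be
    overlaid (e.g. as a 2D histogram) to form the eye pattern."""
    n_bits = len(samples) // samples_per_bit
    return samples[: n_bits * samples_per_bit].reshape(n_bits, samples_per_bit)

# Synthetic NRZ stream: 500 random bits, 10 samples per bit, added noise
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 500)
wave = np.repeat(bits.astype(float), 10) + rng.normal(0.0, 0.05, 5000)
segments = eye_segments(wave, 10)   # each row is one trace of the eye
```

A real-time eye additionally centres each slice on the recovered clock edge and accumulates the rows into an intensity-graded display, which the oscilloscope does in hardware.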

The eye pattern in Figure 11 shows some potential issues. Depending on the user-settable thresholds, the instrument calculates the width and height of the eye. This signal has a bandwidth limitation. We can interpret that because the slope of the rising and falling edges on each side of the eye is not as steep as our design planned for. Analytically, we can determine this by comparing the eye height, eye width, and signal risetime on the screen to our design documents. There also appears to be some frequency uncertainty with regard to the recovered clock. We can see from the histogram that the distribution of the period is not Gaussian, implying some non-random cause of the frequency shifts. Lastly, there is some noise causing the amplitude to fluctuate. This closes the eye vertically.
Using the eye diagram as a visual debug tool, evaluate the cables and connections. Also, look for layout, cross-talk, or other emissions that might be impacting signal quality. Once we remove the signal causing the frequency fluctuations we see the eye start to open in Figure 12. The purple histogram now shows that the remaining timing errors are at least symmetrical. This also improves the eye width in the eye measurements window.
Figure 13 is the result once we identified and removed the nearby noise source. This improves the eye height and width and the entire signal is more precise. Now it is clear that the rising and falling bit transitions don’t reach the same high or low level as the non-transitioning bits. We also see that the rising and falling edges themselves have about a 45° slope with these time and voltage settings. The design document indicates that this should be higher. This is likely a bandwidth issue that is both limiting the risetime of the transitions and causing the eye to close vertically when the signal does not return all the way to its peak or base by the middle of the bit.
Finally, Figure 14 demonstrates improved bandwidth after changing our transmitter circuit. The histogram distribution shows that this has also removed some of the outliers in the signal timing. The improved bandwidth shows clearly in the improved Risetime as well as a completely open vertical eye.

Figure 11: Eye Diagram of signal causing errors

Figure 12: Eye Diagram with improved timing

Figure 13: Eye Diagram with improved noise

Figure 14: Eye Diagram after debugging and improvements

Conclusions
Embedded design and debugging of digital data is a critical requirement in modern electronic products. Modern high performance digital oscilloscopes expand the analysis capabilities available to the everyday engineer’s bench. RIGOL’s UltraVision II technology and our MSO8000 Series oscilloscopes (Figure 15) complete these tools with jitter and eye diagram analysis options, making complete signal quality analysis affordable and easy to use. Jitter and eye diagram analysis on the MSO8000 simplifies viewing, analysing, and resolving issues involving timing, noise, bandwidth, and overall signal quality in serial data links. With new analysis capabilities built on the deep memory and high sample rate of the UltraVision II platform, RIGOL’s MSO8000 Series oscilloscopes are the high performance debugging tool of choice for today’s embedded engineer.

Figure 15: The MSO8000 High Performance Oscilloscope 

Products Mentioned In This Article:

  • MSO8000 Series please see HERE

Bode Plot Analysis of switching Power Supplies

Posted on: June 16th, 2021 by Doug Lovell

Overview

A Bode plot is a graph that maps the frequency response of the system. It was first introduced by Hendrik Wade Bode in 1940.
The Bode plot consists of the Bode magnitude plot and the Bode phase plot. Both the magnitude and the phase are plotted against frequency. The horizontal axis is the frequency on a base-10 logarithmic scale (lgω). The vertical axis of the Bode magnitude plot is 20lg|G(jω)| on a linear scale, with the unit in decibels (dB). The vertical axis of the Bode phase plot also uses a linear scale, with the unit in degrees (°). Usually, the Bode magnitude plot and the Bode phase plot are placed one above the other, with the magnitude plot at the top and their vertical axes aligned. This makes it convenient to observe the magnitude and phase at the same frequency, as shown in the following figure.

The loop analysis test method is as follows: inject a sine-wave signal of continuously swept frequency into the switching power supply circuit as an interference signal, and then judge the ability of the circuit to reject the interference at each frequency according to its output.
This method is commonly used to test switching power supply circuits. The measured changes in the gain and phase of the output voltage are plotted as curves showing how the injected signal is handled as the frequency varies. The Bode plot enables you to analyse the gain margin and phase margin of the switching power supply circuit to determine its stability.

Principle

The switching power supply is a typical feedback loop control system, and its feedback gain model is as follows:

From the above formula, you can see the cause of instability in the closed-loop system: when 1 + T(s) = 0, the response of the system to an interference fluctuation is infinite.
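The formula itself is not reproduced in this capture; for a loop gain $T(s)$, the standard feedback form that the statement relies on is (a sketch, using the conventional notation):

$$\frac{V_{\text{out}}(s)}{V_{\text{in}}(s)} = \frac{T(s)}{1 + T(s)}$$

so a disturbance is amplified without bound wherever the denominator $1 + T(s)$ reaches zero.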

The instability arises from two aspects:

1) when the magnitude of the open-loop transfer function is |T(s)| = 1;

2) when the phase of the open-loop transfer function is ∠T(s) = -180°.

The above is the theoretical value. In fact, to maintain the stability of the circuit system, you need to spare a certain amount of margin. Here we introduce two important terms:

  •  PM: phase margin.
    When the gain |T(s)| is 1, the phase ∠T(s) cannot be -180°. The distance between ∠T(s) and -180° at this point is the phase margin. PM refers to the amount of phase that can be added or removed without making the system unstable. The greater the PM, the greater the stability of the system, and the slower the system response.
  • GM: gain margin.
    When the phase ∠T(s) is -180°, the gain |T(s)| cannot be 1. The distance between |T(s)| and 1 at this point is the gain margin, expressed in dB. If |T(s)| is smaller than 1 at that point, the gain margin is positive; if |T(s)| is greater than 1, it is negative. A positive gain margin indicates that the system is stable, and a negative one indicates that the system is unstable.
    The following figure is the Bode plot. The purple curve shows how the loop gain varies with frequency; the green curve shows how the loop phase varies with frequency. In the figure, the frequency at which the gain is 0 dB is called the “crossover frequency”.

The principle of the Bode plot is simple, and its demonstration is clear. It evaluates the stability of the closed-loop system with the open-loop gain of the system.
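As a numerical sketch of reading PM and GM off a sampled open-loop response (the two-pole T(s) here is a hypothetical plant, not a specific power supply):

```python
import numpy as np

def margins(freq_hz, T):
    """Phase margin at the 0 dB crossover and gain margin at the point
    where the phase is closest to -180 degrees, from a densely sampled
    open-loop response T(jw)."""
    gain_db = 20 * np.log10(np.abs(T))
    phase_deg = np.degrees(np.unwrap(np.angle(T)))
    i_c = np.argmin(np.abs(gain_db))             # |T| closest to 1 (0 dB)
    pm = 180.0 + phase_deg[i_c]
    i_p = np.argmin(np.abs(phase_deg + 180.0))   # phase closest to -180 deg
    gm = -gain_db[i_p]
    return pm, gm

f = np.logspace(1, 6, 2000)                      # 10 Hz .. 1 MHz sweep
s = 2j * np.pi * f
# Hypothetical two-pole loop gain: DC gain 10^4, poles at 100 Hz and 10 kHz
T = 1e4 / ((1 + s / (2 * np.pi * 100)) * (1 + s / (2 * np.pi * 1e4)))
pm, gm = margins(f, T)                           # positive margins => stable
```

The oscilloscope computes the same two numbers from the measured sweep; the sketch only shows where on the curves they are read.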

Loop Test Environment Setup

The following figure is the circuit topology diagram of the loop analysis test for the switching power supply by using RIGOL’s MSO5000 series digital oscilloscope. The loop test environment is set up as follows:
1. Connect a 5Ω injection resistor Rinj to the feedback circuit, as indicated by the red circle in the following figure.
2. Connect the GI connector of the MSO5000 series digital oscilloscope to an isolated transformer. The swept sine-wave signal output from the oscilloscope’s built-in waveform generator is connected in parallel to the two ends of the injection resistor Rinj through the isolated transformer.
3. Use probes connected to two analogue channels of the MSO5000 series digital oscilloscope (e.g. RIGOL’s PVP2350 probe) to measure the injection and output ends of the swept signal.

The following figure is the physical connection diagram of the test environment.

Operation Procedures

The following section introduces how to use RIGOL’s MSO5000 series digital oscilloscope to carry out the loop analysis. The operation procedures are shown in the figure below.

Step 1 To Enable the Bode Plot Function
Enable the touch screen, then tap the function navigation icon at the lower-left corner of the screen to open the function navigation. Then tap the “Bode” icon to open the “Bode” setting menu.
Step 2 To Set the Swept Signal
Press Amp/Freq Set. to enter the amplitude/frequency setting menu; the Bode Set window is displayed. You can tap the input field of each parameter on the touch screen and set it by entering a value with the pop-up numeric keypad. Press Var.Amp. repeatedly to enable or disable the voltage amplitude of the swept signal in the different frequency ranges.

The definitions for the parameters on the screen are shown in the following table.

Note:
The set “StopFreq” must be greater than the “StartFreq”.
Press Sweep Type, and rotate the multifunction knob to select the desired sweep type, and then press down the knob to select it. You can also enable the touch screen to select it.

  • Lin: the frequency of the swept sine wave varies linearly with the time.
  • Log: the frequency of the swept sine wave varies logarithmically with the time.

Step 3 To Set the Input/Output Source

As shown in the circuit topology diagram in Loop Test Environment Setup, the input source acquires the injection signal through the analog channel of the oscilloscope, and the output source acquires the output signal of the device under test (DUT) through the analog channel of the oscilloscope. Set the output and input sources by the following operation methods.
Press In and rotate the multifunction knob to select the desired channel, and then press down the knob to select it. You can also enable the touch screen to select it.
Press Out and rotate the multifunction knob to select the desired channel, and then press down the knob to select it. You can also enable the touch screen to select it.
Step 4 To Enable the Loop Analysis Test
In the Bode setting menu, the initial running status shows “Start” under the Run Status key. Press this key and the Bode Wave window is displayed; in the window, you can see the Bode plot being drawn. At this time, tap the “Bode Wave” window to display the Run Status menu, which now shows “Stop”.

Step 5 To View the Measurement Results from the Bode Plot
After the Bode plot has finished drawing, the Run Status menu shows “Start” again. You can view the Bode plot in the Bode Wave window, as shown in the following figure.

The following table lists the descriptions for the main elements in the Bode plot.

Press Disp Type and rotate the multifunction knob to select “Chart” as the display type of the Bode plot. The following table will be displayed, and you can view the parameters of the measurement results for loop analysis test.

Step 6 To Save the Bode Plot File
After the test has been completed, save the test results as a specified file type with a specified filename.
Press File Type to select the file type for saving the Bode plot. The available file types include “*.png”, “*.bmp”, “*.csv”, and “*.html”. When you select “*.png” or “*.bmp”, the Bode plot is saved as a waveform image. When you select “*.csv” or “*.html”, the Bode plot is saved as a chart.
Press File Name, input the filename for the Bode plot in the pop-up numeric keypad.

Key Points in Operation

When performing the loop analysis test for the switching power supply, pay attention to the following points when injecting the test stimulus signal.
Selection of the Interference Signal Injection Location
The interference signal is injected into the feedback loop. Generally speaking, in a voltage-feedback switching power supply circuit, the injection resistor is placed between the output voltage point and the voltage-dividing resistor of the feedback loop. In a current-feedback switching power supply circuit, place the injection resistor after the feedback circuit.
Selection of the Injection Resistor
When choosing the injection resistor, keep in mind that it should not affect system stability. As the voltage-dividing resistor is generally at the kΩ level or above, the impedance of the injection resistor should be between 5 Ω and 10 Ω.
Selection of Voltage Amplitude of the Injected Interference Signal
Try an injected-signal amplitude of between 1/20 and 1/5 of the output voltage.

If the voltage of the injected signal is too large, it will drive the switching power supply into nonlinear operation, resulting in measurement distortion. If the voltage of the injected signal in the low-frequency band is too small, the signal-to-noise ratio will be low and interference will be large.
Usually a higher voltage amplitude is used when the injection signal frequency is low, and a lower amplitude when the frequency is higher. By selecting different voltage amplitudes in different frequency bands of the injection signal, we can obtain more accurate measurement results. The MSO5000 series digital oscilloscope supports varying the amplitude of the swept signal with output frequency. For details, refer to the Var.Amp. key introduced in Step 2 To Set the Swept Signal.
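This banded-amplitude idea can be sketched as a simple lookup (this is an illustration of the rule of thumb, not the instrument's actual implementation; the band edges and amplitudes below are hypothetical):

```python
def injection_amplitude(freq_hz, bands):
    """Return the injection amplitude (V) to use at a given sweep frequency.

    bands: list of (upper_freq_hz, amplitude_v) sorted by frequency;
    lower-frequency bands get larger amplitudes for better SNR.
    """
    for upper, amp in bands:
        if freq_hz <= upper:
            return amp
    return bands[-1][1]  # above the last band edge, use the smallest amplitude

# Hypothetical setup: 12 V output supply, amplitudes between 1/20 and 1/5
# of the output voltage (0.6 V to 2.4 V), tapering as frequency rises.
bands = [(1_000, 2.4), (10_000, 1.2), (100_000, 0.6)]
print(injection_amplitude(500, bands))     # low band -> largest amplitude
print(injection_amplitude(50_000, bands))  # high band -> smallest amplitude
```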
Selection of the Frequency Band for the Injected Interference Signal
The frequency sweep range of the injection signal should be near the crossover frequency, which makes it easy to observe the phase margin and gain margin in the generated Bode plot. In general, the crossover frequency of the system is between 1/20 and 1/5 of the switching frequency, and the frequency band of the injection signal can be selected within this frequency range.

Experience

The switching power supply is a typical feedback control system with two important indicators: system response and system stability. System response refers to how quickly the power supply adjusts when the load or input voltage changes. System stability is the ability of the system to suppress input interference signals of different frequencies.
The greater the phase margin, the slower the system response; the smaller the phase margin, the poorer the system stability. Similarly, if the crossover frequency is too high, system stability suffers; if it is too low, the system response is slow. To balance system response and stability, we offer the following guidelines:
● The crossover frequency is recommended to be 1/20 to 1/5 of the switching frequency.

● The phase margin should be greater than 45°. 45° to 80° is recommended.

● The gain margin is recommended to be greater than 10 dB.
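These three guidelines can be captured in a short check against a Bode-plot result; the function below is an illustrative sketch, and the measurement values passed in are hypothetical:

```python
def check_loop_margins(crossover_hz, phase_margin_deg, gain_margin_db, f_switch_hz):
    """Check loop-analysis results against the rule-of-thumb guidelines:
    crossover at 1/20 to 1/5 of the switching frequency, phase margin
    greater than 45 degrees, gain margin greater than 10 dB."""
    return {
        "crossover_ok": f_switch_hz / 20 <= crossover_hz <= f_switch_hz / 5,
        "phase_margin_ok": phase_margin_deg > 45,
        "gain_margin_ok": gain_margin_db > 10,
    }

# Hypothetical measurement from a 100 kHz switcher:
result = check_loop_margins(8_000, 55, 12, 100_000)
print(result)  # all three criteria pass for these values
```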

Summary

RIGOL’s MSO5000 series digital oscilloscope can generate a swept signal over a specified range by controlling the built-in signal generator module and output it to the switching power supply to carry out a loop analysis test. The Bode plot generated from the test displays the gain and phase variations of the system at different frequencies, from which you can read the phase margin, gain margin, crossover frequency, and other important parameters. The Bode plot function is easy to operate, and engineers may find it convenient for analysing circuit system stability.

Upgrade Method
Online Upgrade
After the oscilloscope is connected to the network via the LAN interface (if you do not have access to the Internet, ask your administrator to open the required network permissions), you can upgrade the system software online.
1) Enable the touch screen and then tap the function navigation icon at the lower-left corner of the touch screen to enable the function navigation.
2) Tap the “Help” icon, and then the “Help” menu is displayed on the screen.
3) Press Online upgrade or tap “Online upgrade” on the touch screen; a “System Update Information” window is displayed, asking whether you accept the “RIGOL PRODUCT ONLINE UPGRADE SERVICE TERMS”. Tap “Accept” to start the online upgrade, or tap “Cancel” to cancel it.

Local Upgrade
For the local upgrade, please download the latest firmware from the RIGOL website and then perform the upgrade.

Products Mentioned In This Article:

  • MSO5000 Series please see HERE
  • PVP2350 please see HERE

Debug & Analysis of IoT Power Requirements

Posted on: June 16th, 2021 by Doug Lovell

Power and Function

The relationship between power and function in an Internet of Things (IoT) project is perhaps the most fundamental trade-off a design team needs to address; therefore, it must be made with definitive, testable goals from the customer’s perspective. Expert product development always begins with the customer’s needs. This is easy to lose sight of in an IoT development where the final product is a wrapper around the ‘big idea’, but it is all the more important because of it. It can be tempting to design a product around a battery the engineering team has used before or a display they know how to integrate. This design approach focuses on solutions the engineering team can visualise, not on what would satisfy or delight the customer. Viewed through a user’s lens instead, the product may need to work for a week while recharging only once overnight. It may be important for it to be on instantly when needed, or it may work just as well to push a button to wake the device for use. A battery warning may be required, or a sleep mode may be acceptable. The best way to truly understand the options is to start by defining as many customer use models as is reasonable. Ultimately, there is a trade-off between size/weight and use time/energy, but there are many choices for optimisation, and a testing framework can assist in these determinations.
Estimating Power Usage in Development
An important first step is to estimate the power required to collect data, make decisions, and take responding action within the requirements of your device. Estimating this usage over time in a theoretical model built from component specifications is useful, but ultimately rigorous, iterative use-case testing is essential to really understand your power needs. In a modern IoT platform this measurement may not be straightforward.
A low-power System on Chip (SoC) for IoT development may specify a power draw of 5-10 mA for the Bluetooth Low Energy (BLE) radio, but the radio isn’t transmitting constantly. Depending on the power modes available, a device could draw tens of milliamps in operating mode while consuming only a few microamps in a global system sleep mode. Additionally, with so much chip development focused in this area, the state of the art is always changing.
Power Test Methodologies
Battery power consumption in traditional electronics could be measured simply with a digital multimeter (DMM) monitoring the current draw over time. Today’s IoT platforms may be more complicated. Often they draw current in pulses too fast for a typical DMM to measure, which requires a faster measurement system to verify. The IoT platform may provide a current measurement test point; an oscilloscope can then measure the voltage across a small sense resistor in the battery circuit. Depending on the accuracy and resolution you need, this may be an effective technique. For instance, if a 10 Ω resistor is used, then every milliamp of current produces 10 mV. With a typical oscilloscope noise floor near 1-2 mV, this may be noisy. Another option is to use a current probe to capture the signal. The noise performance may not be as good, but the connections are significantly easier if no current measurement test point is provided.
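The sense-resistor conversion is just Ohm's law; as a minimal sketch, assuming the 10 Ω resistor from the example above:

```python
def volts_to_amps(v_samples, r_sense_ohm=10.0):
    """Convert scope voltage samples across a sense resistor to current (I = V / R)."""
    return [v / r_sense_ohm for v in v_samples]

# 10 mV across 10 ohms corresponds to 1 mA, as in the text.
currents = volts_to_amps([0.010, 0.050, 0.120])
print([f"{i * 1e3:.1f} mA" for i in currents])  # 1.0 mA, 5.0 mA, 12.0 mA
```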
After establishing the best measurement technique for your system, I prefer to begin with a static measurement to establish baseline performance. Typically, I would use a standard example program. In this case, we view the current draw from a simple program that toggles several LEDs. In this mode, we measure a baseline of about 5 mA. As shown in Figure 1, activating each LED adds a further 10-12 mA of current on top of this baseline.
Second, establish a start-up or boot power requirement for the system. Here, we conducted a test from a hard boot. In addition to power, we are also monitoring the energy usage, approximated as the integral of voltage × current over time. Refer to Figure 2. Understanding the start-up power requirements from different boot states is critical to optimising the design for sleep or shutdown power mode usage.
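The energy approximation used here, integrating voltage times current over time, can be sketched as a simple rectangular sum; the boot-current samples below are hypothetical:

```python
def energy_joules(voltage, current, dt):
    """Approximate energy = integral of V*I dt with a rectangular (sample-and-hold) sum.

    voltage, current: equal-length lists of samples; dt: sample interval in seconds.
    """
    return sum(v * i for v, i in zip(voltage, current)) * dt

# Hypothetical boot capture: constant 3.3 V rail, 1 ms between samples.
v = [3.3] * 5
i = [0.050, 0.080, 0.120, 0.060, 0.005]  # amps during the boot phases
print(f"{energy_joules(v, i, 1e-3) * 1e3:.2f} mJ consumed during boot")
```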


Not all peripherals draw power in a DC fashion like LEDs. Test the other peripherals you are using to see how they pull power from the battery. A simple test of the SPI bus shows power being drawn in a pulsed fashion. Analysing the amplitude, width, and repetition rate of these current pulses (shown in Figure 3) enables us to understand the power usage.
We can use a similar method to test the actual output power of the Bluetooth radio as well as the battery power used during transmission. This is important once the complete RF antenna layout is finished, since power losses here could result from reflections or mismatch in the RF path. In the test, we leave the Bluetooth Low Energy radio in constant transmit to easily monitor the power consumption. In a real use case, the transmit function is never constant, but the values here guide optimisation of on-time and power for the radio use cases. In the results below, a transmit power of 0 dBm resulted in 83 mA of current draw, while a transmit power of -40 dBm resulted in 67 mA. Even in an off state, the radio example uses significant power to prepare the radio and peripherals. These baseline values help us determine the radio power and transmit cycles that may fit our customer use cases. RF and DC power are shown for these two cases in Figure 4.
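To see how such transmit currents translate into battery life, a simple duty-cycle model can be sketched; only the 83 mA transmit figure comes from the measurement above, while the sleep current, duty cycle, and cell capacity below are hypothetical:

```python
def average_current_ma(i_tx_ma, i_sleep_ma, tx_duty):
    """Duty-cycle-weighted average draw for a radio that transmits a fraction of the time."""
    return i_tx_ma * tx_duty + i_sleep_ma * (1 - tx_duty)

def battery_life_hours(capacity_mah, i_avg_ma):
    """Ideal battery life, ignoring self-discharge and conversion losses."""
    return capacity_mah / i_avg_ma

# 83 mA while transmitting (from the 0 dBm measurement), with a hypothetical
# 0.05 mA sleep current, 1% transmit duty cycle, and 200 mAh cell:
i_avg = average_current_ma(83.0, 0.05, 0.01)
print(f"{i_avg:.3f} mA average -> {battery_life_hours(200, i_avg):.0f} h")
```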


Once we have determined typical power levels by state and peripheral usage, the next step is to verify that no other effects contribute to power usage in a more dynamic mode of switching operational states. We can test our code examples using a function that measures both instantaneous power and energy over time. With this approach, we can determine the energy usage of an approach over time and see how different sleep algorithms affect battery life. With this level of information, code optimisation in response to customer needs becomes much simpler.
To complete the IoT design and get a product to market quickly and cost-effectively, this information on power usage in different setups and modes is an invaluable tool. The completed tests have enabled us to gather information about static-state power usage for our platform and our required peripherals. We also have details on sleep states and boot power from which we can make informed decisions about trade-offs between battery size and use times in our customer use models. With a basic understanding of how power is consumed in our system, we can use this as a guideline for incremental improvements throughout the design cycle. For instance, we better understand the advantage of waiting to service the peripherals versus putting the whole system into sleep mode for a short period. These decisions can then be validated and refined as your application and use cases become clear.
Conclusions and Key Learnings
When working in the fast-changing atmosphere of IoT design and development, reliable test methodologies become increasingly important. As engineers integrate the newest sensors and platforms on the fly to reach highly competitive markets as fast as possible, understanding core customer requirements and trade-offs and how those can be evaluated and compared throughout the development is an important step toward improving the strategic design process.
Whether the challenges of an application are more form or function, issues related to battery life and power usage are fundamental elements of design in the IoT ecosystem that play a significant role in market success. Establishing these principles early in the process and testing them iteratively is one of the best ways to limit budget and schedule risks in the latter stages of the design. Modern, easy to use test equipment that is more affordable than ever can be utilized to develop the limits and baselines that will guide an engineering team through a successful product development.

Products Mentioned In This Article:

  • Digital Multimeters please see HERE

Utilising Deep Memory with Rigol DS1000 Oscilloscope

Posted on: June 16th, 2021 by Doug Lovell

Long Memory Storage with Rigol Oscilloscopes

Some digital scopes offer the capability of capturing waveforms in Long Memory mode. Utilising the long memory, Rigol scopes can capture complex signals in great detail over extended time periods, allowing an observer to examine high-frequency effects within the captured waveform.
How to use Long Memory mode on the DS1000D/E Series scopes

  • Enter the Acquire menu by pressing the Acquire button

  • Set the Memory Depth field to Long Memory

Sampling rate and Long Memory:

Using the DS1000E and DS1000D Series Rigol digital scopes, users can access long memory mode to capture up to 512 kpts in dual-channel mode and up to 1 Mpts in single-channel mode. In comparison, standard memory depths for oscilloscopes range from 1,000 pts to 16 kpts.
This deeper memory, which sustains a higher sampling rate over long captures, is a big advantage when the application requires capturing a longer waveform but also needs to resolve higher-frequency components within the signal.
The best way to visualise these higher-frequency components is the zoom feature. Pushing the Horizontal knob enters zoom mode. In this screen shot you see a 0.5-second waveform in yellow across the top. In the lower section we have zoomed in to 10 µs/division. Since we are sampling in normal mode, you can see the distance between two data points: a straight line connects them, and where the signal rises between samples it is clearly being linearly interpolated. On the DS1000D/E Series scopes, this normal-mode operation stores 16 kpts per waveform.
Since the overall waveform is set to 50 ms/div, the 10 divisions across the screen make the capture 0.5 seconds long. 16 kpts spread across 0.5 seconds means one sample approximately every 32 µs, which the screen shot confirms.
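This sample-interval arithmetic generalises to a one-line calculation:

```python
def sample_interval(timebase_s_per_div, divisions, memory_pts):
    """Time between samples when memory_pts are spread over the full capture window."""
    return timebase_s_per_div * divisions / memory_pts

# Normal mode: 16 kpts over 50 ms/div * 10 div = 0.5 s -> ~31 us per sample.
print(sample_interval(50e-3, 10, 16_000))
# Long memory: 1 Mpts over the same 0.5 s window -> 500 ns per sample.
print(sample_interval(50e-3, 10, 1_000_000))
```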


Switching the scope to deep memory mode using the method above, we can see the increased sampling rate below. In this image the increased data rate is evident in the true shape of the rise time. With the increased sampling we now have 1 million points over 0.5 seconds, or one sample about every 500 ns.


In conclusion, memory depth is an important feature to consider when selecting an oscilloscope, because high-frequency components can be important to the analysis, triggering, and monitoring of signals. Memory depth combined with maximum sampling rate, waveform acquisition rate, and display quality all affect the overall ease of capturing and analysing data with your oscilloscope.

Products Mentioned In This Article:

  • DS1000E Series please see HERE

Debug and Analysis Considerations for Optimising Signal Integrity in your Internet of Things Design

Posted on: June 16th, 2021 by Doug Lovell

Introduction

As the Internet of Things continues to expand to include applications such as home automation, fitness, video, and tracking as well as traditional embedded electronics use cases, the needs for testing and optimising these designs become clearer. Even as IoT expands, the test requirements have begun to crystallise into a set of capabilities that can assist a design team in achieving their goals. This is especially important given the smaller, more versatile teams and companies charting a path in IoT. As companies and engineers often new to large-scale design work attempt to create new types of successful consumer products, IoT projects will continue to progress in fits and starts until designers find the optimal mix of form and function for each target audience, balancing reliability against annoyances like the additional time required to charge the IoT device. By first considering test requirements as a method for evaluating these concerns, a design team can speed time to market and ultimately reduce the iterations required to create a successful product platform. Selecting the best test equipment for the task at hand also enables engineers to stay within their start-up budget requirements. Signal integrity is a key design aspect whose importance is magnified by these inherent trade-offs in the IoT landscape. Focusing on signal integrity as a key indicator of the optimal approach for an IoT application, from debug through final design, enables a small IoT design team to leverage their capabilities and ultimately reward their customers with performance and ease of use. Here we will discuss important examples of signal integrity test methodologies for IoT devices and how they impact the user experience from form to function.

Form and Reliability
It is crucial to consider how dramatically signal integrity and reliability are affected by the mechanical implementation. The trend for IoT products, especially wearables, continues to be sleeker, lighter, and smaller. Optimising the form of the product for users is therefore often in conflict with reliability, ruggedness, and signal integrity requirements. Designs that strike a balance too close to the aesthetic ideal often fail to succeed for these core functional reasons. With more demand for waterproof and rugged designs that also integrate accelerometers and other sensors, we create an environment in which signal integrity can easily be compromised. This can lead to shorter battery life, user feedback delay, and even critical data or system faults.
Let us test a few examples, looking for issues in communication on our SPI peripheral bus. The SPI bus is a common communication protocol for accelerometers, GPS chips, and many other sensors and actuators. Continuously monitoring a design to make certain that reliable communication will be maintained to quality standards is an important test technique for this type of product development. In these tests, we will show several methods for verifying and comparing SPI communication that can be used iteratively as your design approaches completion.
First, several aspects of serial communication are commonly affected by connections, layout, and stress or other aspects of long-term use: bandwidth, noise, and impedance. Again, our first goal is to establish both a target and a baseline for these parameters. The first step is to monitor and verify these signals with the analogue channels of an oscilloscope, as shown in Figure 1.
In this test we can see that our connections have more than enough bandwidth, but we can also see some crosstalk noise, especially on the data and chip-select lines. The green bus values are the scope’s internal decode of the signal. As long as these are stable for each transmission, the noise is likely acceptable. Since crosstalk is a common issue, it can be important to test the bus while other peripherals are being serviced and activated asynchronously. One methodology is to use a pass/fail mask, as implemented in Figure 2, to look for unacceptable noise levels across a large sequence of traces.
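A software analogue of the scope's pass/fail mask test can be sketched by building an envelope around a golden trace and checking captured traces against it; the voltage values and tolerance below are hypothetical:

```python
def build_mask(reference, tolerance_v):
    """Build upper/lower envelopes around a golden trace, one value per sample."""
    upper = [v + tolerance_v for v in reference]
    lower = [v - tolerance_v for v in reference]
    return upper, lower

def trace_passes(trace, upper, lower):
    """A trace passes if every sample stays inside the mask envelope."""
    return all(lo <= v <= hi for v, hi, lo in zip(trace, upper, lower))

golden = [0.0, 3.3, 3.3, 0.0]           # idealised chip-select edge
upper, lower = build_mask(golden, 0.2)  # hypothetical 200 mV noise budget
print(trace_passes([0.05, 3.25, 3.35, -0.1], upper, lower))  # within mask
print(trace_passes([0.05, 2.80, 3.35, -0.1], upper, lower))  # crosstalk dip fails
```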

Figure 1: Basic SPI Analysis with analogue oscilloscope channels

Figure 2: Pass / Fail Analysis to highlight crosstalk on a communication bus


One advantage of this approach is that masks can easily be saved and reused on different board or code revisions within a project. Simple tests can therefore quickly identify the version or build that first showed an increase in noise or a change in performance. It is important to recognise that the opposite can be true as well: device performance may be so reliable that different layouts or smaller wires for connections become possible, creating more freedom for the overall mechanical design.
When there are issues with bandwidth or signal levels, impedance may ultimately be the culprit. High-impedance probing is designed to deliver maximum voltage to the scope, but many serial buses have a characteristic impedance such as 50 Ω. In these cases, a valuable test is to transmit signals directly into a 50 Ω input setting on the oscilloscope. This makes it possible to visualise what the 50 Ω receiving end sees during a transmission. If the impedance of the cabling, connections, or transmitter is off, the amplitude may droop or square waves may be misshapen. Regardless of the impedance of your lines, there are RC components that affect the rise time and overshoot of the transitions. Selecting the wrong termination resistors for your bus can cause extraneous power drain or poor transmission settling. System bandwidth and line impedance issues can be caused by these resistances as well as by excessive or unstable capacitance. Like noise, these symptoms usually indicate physical hardware issues beyond basic valid data transmissions. When significant data errors or bugs do come into play, another verification method can be used: to get a clearer picture of how our IoT platform is interpreting the data, we can use the mixed-signal channels for comparison.
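The effect of those RC components on edge speed can be estimated with the standard first-order rules of thumb (10-90% rise time ≈ 2.2·RC and bandwidth ≈ 0.35 / rise time); the 50 Ω and 20 pF values below are hypothetical:

```python
def rise_time_s(r_ohm, c_farad):
    """10-90% rise time of a first-order RC network: t_r = 2.2 * R * C."""
    return 2.2 * r_ohm * c_farad

def bandwidth_hz(t_rise_s):
    """Approximate -3 dB bandwidth from rise time: BW = 0.35 / t_r."""
    return 0.35 / t_rise_s

# Hypothetical 50 ohm termination driving 20 pF of trace/probe capacitance:
tr = rise_time_s(50, 20e-12)
print(f"t_r = {tr * 1e9:.1f} ns, BW = {bandwidth_hz(tr) / 1e6:.0f} MHz")
```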
The digital channels on the oscilloscope (shown on the bottom of Figure 3) can be set with threshold values that mimic the bus controller on the IoT platform.

Figure 3: Comparison with mixed signal channels for data interpretation


The device does not see the analogue signals; it interprets their digital equivalent. Therefore, when chasing data issues it is important to look at both the digital and analogue representations. This enables engineers to quickly discover where data failures occur as well as the analogue root cause or noise source that may be contributing.
Lastly, one important consideration when implementing sensors in an IoT device is the latency between bus communication and action. This is highly dependent on both the platform and the code methodology and can change dramatically. Latency is the time it takes for the system or platform to interpret the peripheral data and take an action. As a latency baseline, we ran a simple test that toggles an LED when a SPI read completes. The cursor measurements in Figure 4 highlight this.

Figure 4: Latency measurement from SPI bus to LED on our IoT development board


From this baseline on your platform you can model how code changes impact this latency, and then make decisions about servicing these sensors based on the balance of use-case and reliability requirements against battery life and implementation.
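Offline, the same cursor-style latency measurement can be sketched from captured edge timestamps; the traces below are hypothetical stand-ins for the chip-select and LED channels:

```python
def first_edge_time(samples, times, threshold, rising=True):
    """Return the time of the first threshold crossing in a sampled trace."""
    for (a, b), t in zip(zip(samples, samples[1:]), times[1:]):
        if (rising and a < threshold <= b) or (not rising and a > threshold >= b):
            return t
    return None

# Hypothetical captures at 1 us/sample: CS returns high at sample 3,
# the LED line toggles at sample 8 -> 5 us of service latency.
t = [i * 1e-6 for i in range(10)]
cs = [0, 0, 0, 3.3, 3.3, 3.3, 3.3, 3.3, 3.3, 3.3]
led = [0, 0, 0, 0, 0, 0, 0, 0, 3.3, 3.3]
latency = first_edge_time(led, t, 1.65) - first_edge_time(cs, t, 1.65)
print(f"latency = {latency * 1e6:.1f} us")
```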
Latency, noise, bandwidth, and impedance all affect signal integrity and reliability. These key measurement techniques can be used throughout development, use-case, and reliability testing to optimise the overall design and anticipate failure modes. As part of an overall IoT design strategy, continuous evaluation of signal integrity against established baselines and limits helps speed time to market and customer acceptance with little impact on equipment budgets.
Conclusions and Key Learnings
When working in the fast-changing atmosphere of IoT design and development, reliable test methodologies become increasingly important. As engineers integrate the newest sensors and platforms on the fly to reach highly competitive markets as fast as possible, understanding core customer requirements and trade-offs and how those can be evaluated and compared throughout the development is an important step toward improving the strategic design process.
Whether the challenges of an application are more form or function, issues related to signal integrity and reliability are fundamental elements of design in the IoT ecosystem that play a significant role in market success. Establishing these principles early in the process and testing them iteratively is one of the best ways to limit budget and schedule risks in the latter stages of the design. Modern, easy to use test equipment that is more affordable than ever can be utilised to develop the limits and baselines that will guide an engineering team through a successful product development.

Products Mentioned In This Article:

  • DS1000Z Series please see HERE