By Adam Taylor, James Sloan, Tristan Cakebread, James Endicott

e2v

CCDs are used in many high performance imaging systems when quantum efficiency, dynamic range, dark signal, and read noise are the key driving requirements. CCDs operate in a parallel/serial structure, where each line of the image moves down in parallel and the line to be output clocks into a serial register for pixel-by-pixel readout. For best performance, these parallel and serial transfers require precise timing and precise overlaps because CCDs are, after all, analog components. Many existing CCD drive systems are based on a large-scale architecture, which limits their performance and flexibility. The product development team within the Space Imaging group at e2v have been developing a flexible proximity electronics core prototype to be used for both internal test and systems offerings. In this case, “proximity” means that the drive electronics are located very close to the CCD for better performance. This design architecture allows the main image processing to be performed further away by dedicated processing clusters once the image is safely in the digital domain.

The key driving requirements of this core are:

- Software-Defined: Software configures the operating modes and the hardware using a defined stack very similar to the OSI 7-layer stack
- Rapid conversion to the digital domain to significantly reduce the analog chain
- An increased level of abstraction used to define CCD drive behavior
- Generation of highly deterministic clocks
- Network-Enabled: Capable of high-speed communication via Gigabit Ethernet
- Interface-Rich: Able to communicate with many common embedded-system interfaces (USB, RS-232, I2C, SPI, CAN, etc.) for use across many applications

The development team chose to use the Xilinx Zynq-7000 SoC for the system prototype because of its support of numerous interface standards and its ability to distribute a design across the dual-core ARM Cortex-A9 MPCore processor and programmable logic (PL) to meet the design requirements. The Zynq SoC allowed the design team to focus on value-added areas: application software, embedded software, FPGA development, and the analog front-end design.

What do we mean by Software Defined?

There’s no clear standard that defines how the components that form a software-defined embedded system all come together. For example, there is no standardized assembly mechanism for collating the host-based application and the embedded-system software, hardware, COTS, FPGA, power supplies and so on. Therefore, the development team adopted the following model:

The Adopted Software-Defined Embedded System Model

As with the OSI communications model, you need not provide all levels to create a working solution. Communication between levels and within levels uses industry-standard interfaces such as AXI for the FPGA API level, SPI or I2C for the module level, and so on. Wherever possible, we avoid using custom interfaces and protocols because they reduce module reusability in other systems and they add unnecessary cost and risk to the schedule.

The Proximity Core Architecture

The architecture of the developed system, shown in the figure below, demonstrates how the application software residing on a local host configures the embedded system. Once configured, the embedded system—consisting of the SoM and the analog card—drives the CCD clocks and quantizes and packetizes the CCD pixel data into Ethernet frames. Due to the SoM’s I/O flexibility, we are also able to interface with other physical elements of the vision system, such as filters and shutters.

CCD Proximity Core System-Level Architecture

Proximity Core Application Software

We developed the PC-based application software using Python. This software allows the user to define waveforms graphically. A screenshot of the prototype GUI appears below. The GUI makes it much easier to understand the waveforms being applied to the CCD than the traditional text-based timer file, which can be difficult to debug and time-consuming to generate.

Prototype waveform generation software GUI

Once the user is happy with the waveforms, the software programs the waveform generators within the Zynq SoC. Because of this design approach, you can easily update waveforms on the fly during device operation, which is great for trying out “what if” scenarios.
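As a sketch of what an on-the-fly waveform update involves, the fragment below converts a list of edge events (as might come from the GUI) into a per-tick table that a waveform generator could play back. The function name, data layout, and 100 MHz generator clock are illustrative assumptions, not e2v's actual API.

```python
# Hedged sketch: convert a list of (time_ns, level) edge events into a
# per-tick waveform table for a hypothetical waveform generator.
# CLOCK_PERIOD_NS and build_waveform_table are illustrative names.

CLOCK_PERIOD_NS = 10  # assume a 100 MHz waveform-generator clock

def build_waveform_table(edges, duration_ns):
    """edges: time-sorted list of (time_ns, level) transitions."""
    table = []
    level = 0
    idx = 0
    for tick in range(duration_ns // CLOCK_PERIOD_NS):
        t = tick * CLOCK_PERIOD_NS
        # Apply every edge whose time has been reached by this tick
        while idx < len(edges) and edges[idx][0] <= t:
            level = edges[idx][1]
            idx += 1
        table.append(level)
    return table

# Example: a transfer clock high from 20 ns to 60 ns in an 80 ns frame
print(build_waveform_table([(20, 1), (60, 0)], 80))  # [0, 0, 1, 1, 1, 1, 0, 0]
```

A table like this can be rewritten at any time without touching the rest of the pipeline, which is what makes the "what if" experimentation cheap.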

The Zynq SoC Core

At the heart of the prototype, the Zynq SoC runs the embedded software and implements the FPGA API levels, which perform the following functions:

- Communicating with the PC-based application software
- Configuring the waveform generators in the Zynq SoC’s PL
- Generating CCD drive waveforms
- Interfacing with the analog front end to quantize the CCD output pixels
- Applying Digital Correlated Double Sampling (DCDS) to determine each pixel value
- Transmitting the image over Gigabit Ethernet

These tasks are split into two groups: the output waveform generation and the input video processing. Output waveform generation requires flexible, highly deterministic waveform generators. The development team used the Xilinx PicoBlaze microcontroller for the waveform generator. PicoBlaze is an 8-bit soft microcontroller core and each PicoBlaze instruction takes two clock cycles, resulting in highly deterministic behavior.
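The fixed two-cycles-per-instruction execution means pulse timing can be computed exactly from instruction counts. A small sketch of that arithmetic (the function names are illustrative):

```python
# PicoBlaze executes every instruction in exactly two clock cycles, so the
# duration of any instruction sequence is fully deterministic.

CYCLES_PER_INSTRUCTION = 2  # fixed for PicoBlaze

def pulse_width_ns(n_instructions, clock_mhz):
    """Duration of n_instructions at the given PL clock frequency, in ns."""
    return n_instructions * CYCLES_PER_INSTRUCTION * 1000.0 / clock_mhz

def instructions_for_width(width_ns, clock_mhz):
    """How many instructions (e.g. a delay-loop body) fill width_ns."""
    return round(width_ns * clock_mhz / (CYCLES_PER_INSTRUCTION * 1000.0))

print(pulse_width_ns(50, 100))           # 50 instructions at 100 MHz -> 1000.0 ns
print(instructions_for_width(240, 100))  # -> 12 instructions
```

This is the property that lets the application software translate a requested waveform directly into a PicoBlaze program with known edge placement.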

The PicoBlaze controller is entirely self-contained within the Zynq SoC’s PL. Its program is stored within internal dual-port BRAM. Using a dual-port BRAM to store the PicoBlaze program instead of a single-port BRAM allows the Zynq SoC’s ARM cores to write new programs on the fly to the PicoBlaze controller. The system’s PC-based application generates the PicoBlaze program and downloads it to the target without calling the usual PicoBlaze assembler.
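Conceptually, the on-the-fly reload looks like the sketch below: the ARM side writes 18-bit PicoBlaze instruction words into the program BRAM through its second port. The reset handling, the `write_word` callback, and the one-instruction-per-word layout are illustrative assumptions, not the core's documented register map.

```python
# Hedged sketch of on-the-fly PicoBlaze reprogramming via dual-port BRAM.
# write_word(address, value) and set_reset(flag) stand in for whatever
# memory-mapped mechanism the ARM cores actually use.

def load_picoblaze_program(instructions, write_word, set_reset):
    """instructions: list of 18-bit PicoBlaze opcodes (ints)."""
    set_reset(True)                         # hold the controller in reset while loading
    for addr, opcode in enumerate(instructions):
        write_word(addr, opcode & 0x3FFFF)  # one 18-bit instruction per BRAM word
    set_reset(False)                        # release reset; new program runs from 0

# Exercise against a simple in-memory model of the BRAM port
bram = {}
state = {"reset": False}
load_picoblaze_program([0x01000, 0x34000], bram.__setitem__,
                       lambda v: state.__setitem__("reset", v))
print(bram)  # {0: 4096, 1: 212992}
```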

Most of the image signal-processing chain uses standard components from the Vivado IP library, but input video processing required the development of custom ADC and DCDS IP blocks. Both standard and custom IP blocks employ AXI Lite interfaces to allow software-defined configuration during system customization and optimization.
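The idea behind DCDS can be sketched in a few lines: for each pixel, average several ADC samples of the reset (reference) level and of the signal level, then take the difference. Sample counts and data layout here are illustrative assumptions.

```python
# Digital Correlated Double Sampling (DCDS) in sketch form: the pixel value
# is the difference between the averaged reset level and the averaged signal
# level, which cancels the reset (kTC) noise common to both.

def dcds_pixel(reset_samples, signal_samples):
    """Return the CDS pixel value from lists of ADC samples of one pixel."""
    reset_level = sum(reset_samples) / len(reset_samples)
    signal_level = sum(signal_samples) / len(signal_samples)
    # CCD video typically swings below the reset level as charge increases,
    # so reset - signal yields a positive pixel value.
    return reset_level - signal_level

print(dcds_pixel([1000, 1002, 998], [740, 742, 738]))  # -> 260.0
```

Averaging multiple samples per level is what distinguishes DCDS from analog CDS: the oversampling ratio becomes a software-configurable trade between read noise and pixel rate.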

Inter-block communication employs the AXI Streaming protocol, which permits easy data transfer to DDR memory using DMA via the Zynq SoC’s High-Performance AXI ports. Once the image is in DDR memory, it can be output to the Gigabit Ethernet connection using DMA, which reduces the load on the ARM processors.
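The packetization step might look like the sketch below, where each image row is wrapped with a small header (frame and row counters) so the receiver can reassemble the image. The header layout is an illustrative assumption, not the core's actual frame format.

```python
import struct

def packetize(frame_id, image_rows):
    """Yield one payload per row: 4-byte frame id, 4-byte row index, pixel bytes."""
    for row_idx, row in enumerate(image_rows):
        # Big-endian unsigned header followed by the raw row data
        yield struct.pack(">II", frame_id, row_idx) + bytes(row)

packets = list(packetize(7, [[1, 2, 3], [4, 5, 6]]))
print(len(packets))          # 2
print(packets[0][:8].hex())  # '0000000700000000'
```

Because the image already sits in DDR memory, payloads like these can be handed to the Ethernet DMA with minimal ARM involvement.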

The embedded software running on the ARM processors has two main areas of responsibility: managing the PicoBlaze controller memories and transmission of the captured image over the Gigabit Ethernet connection.

One added advantage of using the Zynq SoC is its internal XADC, which can be used on the production versions to monitor the housekeeping parameters of the CCD and other embedded-system components. This health information can be tagged onto the end of each transmitted image if needed.
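A sketch of what that tagging could involve is below. The XADC temperature conversion follows the formula in Xilinx UG480; the trailer layout itself is an illustrative assumption.

```python
import struct

def xadc_temp_c(raw12):
    """Convert a 12-bit XADC temperature code to degrees Celsius (per UG480)."""
    return raw12 * 503.975 / 4096.0 - 273.15

def append_housekeeping(image_bytes, raw_temp_code):
    """Append a 4-byte trailer carrying the temperature in centi-degrees."""
    centi_deg = int(round(xadc_temp_c(raw_temp_code) * 100))
    return image_bytes + struct.pack(">i", centi_deg)

print(round(xadc_temp_c(0), 2))  # -273.15
```

In practice the trailer would carry more channels (supply rails, CCD bias voltages), but the pattern is the same: sample, scale, append.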

The Analogue Front End (AFE)

Sadly, CCD devices lack easy-to-use, digital-compliant interfaces. They require an intermediate stage between the digital clock signals output by the Zynq SoC and the CCD’s clock inputs. An intermediate buffer stage amplifies the digital clock signals to the voltages required by the CCD—typically between 8V and 30V—and boosts current capacity as well because a CCD’s clock inputs can present heavy loads, depending on the CCD’s internal design.

The analog input chain scales the CCD’s analog pixel output so that its range is suitable for sampling by the ADC. Depending upon the type of CCD used, additional analog processing may be required such as differential-like subtraction because the CCD’s output is not a true differential signal.

The figure below shows the first output waveform captured from the prototype proximity electronics, in a very electrically and optically noisy lab:

First output waveform from the CCD in the lab

Future Considerations & Advanced Capabilities

Often it is necessary to very finely position the CCD output waveforms with respect to each other, particularly overlap alignment. Although the Zynq SoC can run at high frequencies, we may require finer, sub-nsec temporal control of these signals. While not included in this prototype, e2v has developed a technique to allow sub-nsec pulse positioning under the control of the Zynq SoC’s ARM processors. Combined with the approach described above, this additional control would provide for both coarse and fine timing adjustments to ensure optimal CCD operation.