Ladybug5
USB 3.0 Spherical Camera
Technical Reference
Version 1.2
Revised 1/30/2013
Point Grey Research®, Inc.
12051 Riverside Way • Richmond, BC • Canada • V6W 1K7 • T (604) 242-9937 • www.ptgrey.com
Copyright © 2013 Point Grey Research Inc. All Rights Reserved.
FCC Compliance
This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions:
(1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesirable operation.
Hardware Warranty
Point Grey Research®, Inc. (Point Grey) warrants to the Original Purchaser that the Camera Module provided with this package is guaranteed to be free from material and manufacturing defects for a period of 2 Years. Should a unit fail during this period, Point Grey will, at its option, repair or replace the damaged unit. Repaired or replaced units will be covered for the remainder of the original equipment warranty period. This warranty does not apply to units that, after being examined by Point Grey, have been found to have failed due to customer abuse, mishandling, alteration, improper installation or negligence. If the original camera module is housed within a case, removing the case for any purpose other than to remove the protective glass or filter over the sensor voids this warranty. This warranty does not apply to damage to any part of the optical path resulting from removal or replacement of the protective glass or filter over the camera, such as scratched glass or sensor damage.
Point Grey Research, Inc. expressly disclaims and excludes all other warranties, express, implied and statutory, including, but without limitation, warranty of merchantability and fitness for a particular application or purpose. In no event shall Point Grey Research, Inc. be liable to the Original Purchaser or any third party for direct, indirect, incidental, consequential, special or accidental damages, including without limitation damages for business interruption, loss of profits, revenue, data or bodily injury or death.
WEEE
The symbol indicates that this product may not be treated as household waste. Please ensure this product is properly disposed of, as inappropriate waste handling of this product may cause potential hazards to the environment and human health. For more detailed information about recycling of this product, please contact Point Grey Research.
Trademarks
Point Grey Research, PGR, the Point Grey Research, Inc. logo, Blackfly, Bumblebee, Chameleon, Digiclops,
Dragonfly, Dragonfly Express, Firefly, Flea, FlyCapture, Gazelle, Grasshopper, Ladybug, Triclops and Zebra are trademarks or registered trademarks of Point Grey Research, Inc. in Canada and other countries.
Point Grey Ladybug5 Technical Reference
Table of Contents
Contacting Point Grey Research
1 Ladybug5 Specifications
1.1.2 Post Processing Workflow
1.3 Handling Precautions and Camera Care
1.4 Analog-to-Digital Converter
2 Ladybug5 Installation
2.1.1 Will your system configuration support the camera?
2.1.2 Do you have all the parts you need?
2.1.3 Do you have a downloads account?
2.2 Installing Your Interface Card and Software
2.4.1 Configuring Camera Drivers
3 Tools to Control the Ladybug5
3.1.1 Custom Applications Built with the Ladybug API
3.2 Using the LadybugCapPro Application
3.2.2 Working in the LadybugCapPro Main Window
3.2.2.3 Stream Navigation Toolbar
3.2.2.4 Stream Processing Toolbar
3.2.2.5 Image Processing Toolbar
3.2.2.7 LadybugCapPro Main Menu
3.2.2.8 LadybugCapPro Status Bar
3.2.3 Using the Camera Control Dialog
3.2.3.5 Advanced Camera Settings
4 Ladybug5 Physical Interface
4.2.3 Using the Tripod Adapter
4.5 Camera Interface and Connectors
4.5.4 General Purpose Input/Output (GPIO)
5 General Ladybug5 Operation
5.2 User Sets (Memory Channels)
5.5.1 Determining Firmware Version
5.5.2 Upgrading Camera Firmware
6 Input/Output Control
6.1 General Purpose Input/Output (GPIO)
6.2.3 GPIO Mode 2: Asynchronous (External) Trigger
6.3 Programmable Strobe Output
6.5 12-Pin GPIO Electrical Characteristics
7 Image Acquisition
7.2.1 Standard External Trigger (Mode 0)
7.2.2 Bulb Shutter Trigger (Mode 1)
7.2.3 Skip Frames Trigger (Mode 3)
7.2.4 Overlapped Exposure Readout Trigger (Mode 14)
7.2.5 Multi-Shot Trigger (Mode 15)
7.4 Camera Behavior Between Triggers
7.5 Changing Video Modes While Triggering
7.6 Asynchronous Software Triggering
7.7 Asynchronous Trigger Settings
7.8.1 Using GPS with the Ladybug API
7.8.2 Generating Google Maps and Google Earth data
8 Imaging Parameters
8.1 Pixel Formats, Frame Rates, and Image Sizes
8.2.3 JPEG Compression and JPEG Buffer Usage
8.8 Independent Sensor Control of Shutter, Gain and Auto Exposure
8.10 High Dynamic Range (HDR) Imaging
8.11 Embedded Image Information
9 Post Processing Control
9.2.3 Rendering the Image for Display
9.2.4 Stabilizing Image Display
9.2.5 Adjusting Sphere Size for Stitching
9.2.8 Adjusting 12- and 16-bit Images
9.4 Viewing and Outputting Stream Files
10 Troubleshooting
10.3.1 Pixel Defect Correction
Appendix A: Ladybug API Examples
A.5 ladybugEnvironmentalSensors
A.13 ladybugProcessStreamParallel
Appendix B: Stream File Format
B.5 JPEG Compressed Image Data Structure
B.6 Uncompressed Image Data Structure
Appendix C: Calibration and Coordinate System
C.1 Coordinate Systems on Ladybug Cameras
C.1.1 Lens 3D coordinate system
C.1.2 Sensor 2D coordinate system
C.1.3 Ladybug Camera Coordinate System
Revision History
List of Tables
Table 4.1: USB 3.0 Micro-B Connector Pin Assignments
Table 6.1: GPIO pin assignments (as shown looking at rear of camera)
Table 8.1: Ladybug5 Supported image formats
List of Figures
Figure 1.1: Ladybug5 capture workflow
Figure 1.2: Ladybug5 post processing workflow
Figure 1.4: 12-bit image corrected during post processing
Figure 4.1: Ladybug5 Dimensional Diagram
Figure 4.2: Desktop Mount (in mm)
Figure 4.3: Tripod Adapter (in mm)
Figure 4.4: IR filter transmittance graph
Figure 4.5: USB 3.0 Micro B Connector
Figure 6.1: Debouncer Filtering Invalid Signals
Figure 7.1: Trigger Mode 0 (“Standard External Trigger Mode”)
Figure 7.2: Trigger Mode 1 (“Bulb Shutter Mode”)
Figure 7.3: Trigger Mode 3 (“Skip Frames Mode”)
Figure 7.4: Trigger Mode 14 (“Overlapped Exposure/Readout Mode”)
Figure 7.5: Trigger Mode 15 (“Multi-Shot Trigger Mode”)
Figure 7.6: External trigger timing characteristics
Figure 7.7: Relationship Between External Triggering and Video Mode Change Request
Figure 7.8: Software trigger timing
Figure 8.1: Example Bayer Tile Pattern
Figure 9.1: Scene with Tilt Effect
Figure 9.2: Tilt-Adjusted Scene
Figure 9.3: Suggested Clicking Points to Specify Vertical Line Adjustment
Contacting Point Grey Research
For any questions, concerns or comments please contact us via the following methods:
- General questions about Point Grey Research
- Technical support (existing customers only)
- Knowledge Base: find answers to commonly asked questions in our Knowledge Base
- Downloads: download the latest documents and software

Main Office
Point Grey Research, Inc.
12051 Riverside Way
Richmond, BC, Canada V6W 1K7
Tel: +1 (604) 242-9937
Toll Free: +1 (866) 765-0827 (North America only)
Fax: +1 (604) 242-9938
Email:

USA
Tel: +1 (866) 765-0827
Email:

Europe and Israel
Point Grey Research GmbH
Schwieberdinger Strasse 60
71636 Ludwigsburg
Germany
Tel: +49 7141 488817-0
Fax: +49 7141 488817-99
Email:

Distributors
Japan: ViewPLUS Inc., www.viewplus.co.jp
Korea: Cylod Co. Ltd., www.cylod.com
China: LUSTER LightVision Tech. Co., Ltd., www.lusterlighttech.com
Singapore, Malaysia & Thailand: Voltrium Systems Pte Ltd., www.voltrium.com.sg
Taiwan: Apo Star Co., Ltd., www.apostar.com.tw
United Kingdom: ClearView Imaging Ltd., www.clearviewimaging.co.uk
Revised 1/30/2013
Copyright ©2013 Point Grey Research Inc.
i
Point Grey Ladybug5 Technical Reference
Contacting Point Grey Research
About This Manual
This manual provides the user with a detailed specification of the Ladybug5 camera system. The user should be aware that the camera system is complex and dynamic; if any errors or omissions are found during experimentation, please contact us. (See Contacting Point Grey Research.)
This document is subject to change without notice.
All model-specific information presented in this manual reflects functionality available in the model's firmware version.
Where to Find Information
Chapter 1, Specifications: General camera specifications and specific model specifications, and camera properties.
Chapter 2, Installation: Instructions for installing the Ladybug5, as well as an introduction to Ladybug5 configuration.
Chapter 3, Tools to Control the Ladybug5: Information on the tools available for controlling the Ladybug5.
Chapter 4, Physical Interface: Information on the mechanical properties of the Ladybug5.
Chapter 5, General Operation: Information on powering the Ladybug5, monitoring status, user configuration sets, memory controls, and firmware.
Chapter 6, Input/Output Control: Information on input/output modes and controls.
Chapter 7, Image Acquisition: Information on asynchronous triggering and supported trigger modes.
Chapter 8, Imaging Parameters: Information on supported imaging parameters and their controls.
Chapter 9, Post Processing Control: Information on image processing on the PC after capture.
Chapter 10, Troubleshooting: Information on how to get support, diagnostics for the Ladybug5, and common sensor artifacts.
Appendix A, Ladybug API Examples: Sample programs provided with the Ladybug SDK.
Appendix B, Stream File Format: Detailed information on stream files.
Appendix C, Calibration and Coordinate System: Information on translating 2D and 3D points.
Document Conventions
This manual uses the following to provide you with additional information:
A note that contains information that is distinct from the main body of text. For example, drawing attention to a difference between models; or a reminder of a limitation.
A note that contains a warning to proceed with caution and care, or to indicate that the information is meant for an advanced user. For example, indicating that an action may void the camera's warranty.
If further information can be found in our Knowledge Base, a list of articles is provided.
Related Knowledge Base Articles
Title: Title of the Article
Article: Link to the article on the Point Grey website
If there are further resources available, a link is provided either to an external website, or to the SDK.
Related Resources
Title: Title of the resource
Link: Link to the resource
1 Ladybug5 Specifications
1.1 Image Processing Pipeline
1.1.1 Capture Workflow
The diagram below depicts the flow of data on the Ladybug5 during image capture. The table that follows describes the steps in more detail.
Figure 1.1: Ladybug5 capture workflow
Sensor: Each of the six Sony® ICX655 CCD sensors produces voltage signals in each pixel from the optical input.

Analog-to-Digital (A/D) Converter: Each sensor's A/D converter transforms pixel voltage into a 12-bit value, adjusting for gain in the process.

Pixel Correction: The camera firmware corrects any blemish pixels identified during manufacturing quality assurance by applying the average value of neighboring pixels.

On-Camera Processing: The amount of on-camera processing performed depends on which pixel format was selected. For 8-bit formats, gain, black level, white balance, and gamma are applied. For 12- and 16-bit formats, exposure time and gain are optimized for highest bit depth; gamma, white balance and other corrections are performed during post capture. This ensures maximum dynamic range and flexibility, since post-capture processing can be reapplied indefinitely.

JPEG Compression: Image data accumulates in an on-camera frame buffer to perform JPEG compression. Following compression, image data is output in 8- or 12-bit format via the USB 3.0 interface.

Storage Disk: The image data is stored on the PC as stream files (.pgr format).

Display: For the purpose of display, minor processing such as decompression and stitching is performed to allow for verification of the capture. Further processing may be performed depending on the available resources of the PC.
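The pixel correction step described above can be sketched in a few lines. This is not Point Grey's firmware code; the defect list, image layout, and the choice of the four horizontal/vertical neighbors are illustrative assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch of blemish-pixel correction: each pixel flagged as
// defective (e.g. during manufacturing QA) is replaced by the average of
// its in-bounds left/right/up/down neighbors.
struct Defect { int x, y; };

void correctDefects(std::vector<uint16_t>& img, int width, int height,
                    const std::vector<Defect>& defects)
{
    for (const Defect& d : defects)
    {
        unsigned sum = 0;
        int count = 0;
        if (d.x > 0)          { sum += img[d.y * width + d.x - 1]; ++count; }
        if (d.x < width - 1)  { sum += img[d.y * width + d.x + 1]; ++count; }
        if (d.y > 0)          { sum += img[(d.y - 1) * width + d.x]; ++count; }
        if (d.y < height - 1) { sum += img[(d.y + 1) * width + d.x]; ++count; }
        if (count > 0)
            img[d.y * width + d.x] = static_cast<uint16_t>(sum / count);
    }
}
```

Because the defect coordinates are fixed at manufacturing time, the correction is cheap enough to run in firmware on every frame.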
1.1.2 Post Processing Workflow
After capturing images, you can use the Ladybug API to perform the remaining tasks on the PC.
The diagram below depicts the flow of data on the Ladybug5 during post processing. The table that follows describes the steps in more detail.
Figure 1.2: Ladybug5 post processing workflow
Storage Disk: Image data from the capture workflow is stored on the PC as stream files (.pgr).

Decompression: Images are decoded back into raw image format for further processing.

Stitching: By default, the stitching process assumes that all points in the field of view are 20 meters from the camera. This distance produces optimal results for most types of outdoor use.

Falloff Correction: Falloff correction adjusts the intensity of light in images to compensate for a vignetting effect.

Sharpening: Image textures are sharpened. This effect may be most noticeable along texture edges.

Tone Mapping: The dynamic range of images is converted from high (HDR) to low (LDR) to resemble more closely the dynamic range of the human eye.

Post Processing: The amount of post processing performed depends on which pixel format was selected. For 8-bit formats, only the above processing is performed (stitching, falloff correction, sharpening, and tone mapping). For 12- and 16-bit formats, in addition to the above, Bayer decoding, gain, black level, white balance, gamma, and EV compensation are available.

Display: You can change the way images are rendered for display.

Output: Final image files can be output in standard formats.
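Two of the steps above can be sketched numerically. The cosine-fourth falloff model and the simple Reinhard-style tone curve below are common textbook formulations, used here only for illustration; they are not necessarily the exact algorithms or calibration data used by the Ladybug SDK.

```cpp
#include <cassert>
#include <cmath>

// Illustrative falloff (vignetting) gain: an ideal lens darkens roughly as
// cos^4 of the angle from the optical axis, so the correction multiplies
// each pixel by the inverse. Assumed model, not the SDK's calibrated data.
double falloffGain(double angleRad)
{
    double c = std::cos(angleRad);
    return 1.0 / (c * c * c * c);
}

// Illustrative HDR-to-LDR tone curve (Reinhard-style): compresses an
// unbounded linear intensity into [0, 1). Again an assumption for clarity.
double toneMap(double linear)
{
    return linear / (1.0 + linear);
}
```

On the optical axis the falloff gain is exactly 1.0 and grows toward the edge of the field, while the tone curve maps very bright linear values asymptotically toward 1.0.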
Raw versus Processed Images

Color Processing: The raw Bayer-tiled images are interpolated to create full RGB images. Following color processing, images are loaded onto the graphics card of the PC for rectification, projection and blending.

Rectification: Rectification corrects the barrel distortion caused by the Ladybug lenses.

Projection: Image textures are mapped to a single 2- or 3-dimensional coordinate system, depending on the projection that is specified.

Blending: Pixel values in each image that overlap with the fields of adjacent images are adjusted to minimize the effect of pronounced borders. The result is a single, stitched image.

White Balance: Color intensities can be adjusted manually to achieve more correct balance. White balance is ON by default. If not ON, no white balance correction occurs.

Gamma: Gamma can be manually adjusted. By default gamma adjustment is OFF, and no correction occurs.
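The white balance and gamma stages above can be sketched as simple per-channel operations on normalized pixel values. The per-channel gain values and the power-law form below are illustrative assumptions; the 0.50 to 4.00 gamma range matches the specifications table.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative post-processing of one RGB pixel with values normalized to
// [0, 1]: per-channel white balance gains, then a power-law gamma
// adjustment. The gain values passed in are made up for the example.
struct RGB { double r, g, b; };

RGB whiteBalanceAndGamma(RGB in, double redGain, double blueGain, double gamma)
{
    RGB out;
    out.r = std::min(1.0, in.r * redGain);   // scale red channel
    out.g = std::min(1.0, in.g);             // green often left as reference
    out.b = std::min(1.0, in.b * blueGain);  // scale blue channel
    out.r = std::pow(out.r, 1.0 / gamma);    // gamma = 1.0 leaves values unchanged
    out.g = std::pow(out.g, 1.0 / gamma);
    out.b = std::pow(out.b, 1.0 / gamma);
    return out;
}
```

With gamma above 1.0 the curve lifts mid-tones, which is why gamma is typically applied after white balance on linear data.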
Figure 1.3: 12-bit raw image
Figure 1.4: 12-bit image corrected during post processing
For details see Adjusting 12- and 16-bit Images.
1.2 Ladybug5 Specifications

MODEL: LD5-U3-51S5C-44R (Red version), LD5-U3-51S5C-44B (Black version)

All Ladybug5 Models
Megapixels: 30 MP (5 MP x 6 sensors)
Imaging Sensor: Sony ICX655 CCD x 6, 2/3", 3.45 µm; global shutter; 2048 x 2448 at 10 FPS
A/D Converter: 12-bit
Video Data Output: 8-, 12-, or 16-bit, Raw or JPEG compressed
Image Data Formats: Raw8, Raw12, Raw16 in uncompressed and JPEG
Partial Image Modes: Pixel binning and region of interest (ROI) modes
Image Processing: Shutter, gain, white balance, gamma and JPEG compression are programmable via software
Shutter: Global shutter; automatic/manual/one-push/extended shutter modes; 0.02 ms to 2 seconds (extended shutter)
Gain: Automatic/manual/one-push modes for 8-bit formats; manual mode for 12-bit formats; 0 - 18 dB
Gamma: 0.50 to 4.00
White Balance: Manual
High Dynamic Range: Cycle 4 gain and exposure presets
Digital Interface: USB 3.0 with locking screws for secure connection
Transfer Rates: 5 Gbit/s
GPIO: 12-pin GPIO connector for external trigger input, strobe output, and camera power
External Trigger Modes: Standard, bulb, skip frames, overlapped, and multi-shot trigger modes
Memory Channels: 2 memory channels for custom camera settings
Case: Machined aluminum housing, anodized red or black; single unit, water resistant
Dimensions: 197 mm diameter, 160 mm height (with lens hoods)
Mass: 3.0 kg
Power Consumption: 12-24 V, 13 W via GPIO
Machine Vision Standard: IIDC v1.32
Camera Control: Via Ladybug SDK, CSRs, or third party software
Camera Updates: In-field firmware updates
Optics: 6 high quality 4.4 mm focal length lenses
Field of View: 90% of full sphere
Spherical Distance: Calibrated from 2 m to infinity
Focus Distance: ~200 cm. Objects have acceptable sharpness from ~60 cm to infinity
All Ladybug5 Models
Environmental Sensors: Temperature, Barometer, Humidity, Accelerometer, Compass, Gyroscope
Temperature: Operating: 0° to 45°C; Storage: -30° to 60°C
Humidity: Operating: 20 to 80% (no condensation); Storage: 20 to 95% (no condensation)
Compliance: CE, FCC, RoHS
Operating System: Windows 7 or Windows 8, 64-bit with 8 GB RAM
Warranty: 2 Years
1.3 Handling Precautions and Camera Care
Do not open the camera housing. Doing so voids the Hardware Warranty described at the beginning of this manual.
Your Point Grey digital camera is a precisely manufactured and calibrated device and should be handled with care.
Here are some tips on how to care for the device.
- Avoid electrostatic charging.
- When handling the camera unit, avoid touching the lenses. Fingerprints will affect the quality of the image produced by the device.
- To clean the lenses, use a standard camera lens cleaning kit or a clean dry cotton cloth. Do not apply excessive force.
- Avoid excessive shaking, dropping or any kind of mishandling of the device.
To replace the protective glass, the camera must be returned to Point Grey for servicing. Contact Support for more details.
Related Knowledge Base Articles
Solving problems with static electricity: Knowledge Base Article 42
Cleaning the imaging surface of your camera: Knowledge Base Article 66
1.4 Analog-to-Digital Converter
The camera sensor incorporates an A/D converter to digitize the images produced by the CCD.
The Ladybug5's ADC is configured to a fixed 12-bit output. If the selected pixel format has fewer bits per pixel than the ADC output, the least significant bits are dropped. If the selected pixel format has more bits per pixel than the ADC output, the extra least significant bits are padded with zeros.
The 12-bit conversion produces 4,096 possible digital image values between 0 and 65,520, left-aligned across a 2-byte data format. The four unused bits are padded with zeros.
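The truncation and padding described above amount to simple bit shifts. This sketch is illustrative, but the constants follow directly from the 12-bit ADC output described in this section.

```cpp
#include <cassert>
#include <cstdint>

// A 12-bit ADC code (0-4095) delivered in an 8-bit format: the four least
// significant bits are dropped.
uint8_t to8Bit(uint16_t adc12) { return static_cast<uint8_t>(adc12 >> 4); }

// The same code delivered in a 16-bit format: left-aligned, with the four
// unused least significant bits padded with zeros. The maximum 12-bit
// value, 4095, therefore maps to 65520, matching the 0 to 65,520 range above.
uint16_t to16Bit(uint16_t adc12) { return static_cast<uint16_t>(adc12 << 4); }
```

Left-aligning preserves the relative brightness scale across bit depths: the same scene produces the same most-significant bits in every format.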
The following table illustrates the most important aspects of the ADC.
Resolution: 12-bit, 50 MHz
Black Level Clamp: 0 LSB to 255.75 LSB, 0.25 LSB steps
Pixel Gain Amplifier: -3 dB to 6 dB, 3 dB steps
Variable Gain Amplifier: 6 dB to 42 dB, 10-bit
2 Ladybug5 Installation
2.1 Before You Install
2.1.1 Will your system configuration support the camera?
Recommended System Configuration
- Operating System: Windows 7 or Windows 8, 64-bit
- CPU: 3 GHz Dual/Quad Core
- RAM: 8 GB
- Video: NVIDIA, 512 MB
- Ports: USB 3.0
- Software: Microsoft Visual Studio 2005 SP1 and SP1 Update for Vista (to compile and run example code using Ladybug SDK)
2.1.2 Do you have all the parts you need?
To install your camera you will need the following components, included with the Ladybug5:
- USB 3.0 cable
- 12-pin GPIO 6-meter power cable and wiring harness
- Tripod adapter and desktop mount (optional)
- Interface card
Cables provided in the development kit are not high-flex cables. Handle carefully during installation to avoid damaging the wires.
2.1.3 Do you have a downloads account?
The Point Grey downloads page has many resources to help you operate your camera effectively, including:
- Software, including drivers (required for installation)
- Firmware updates and release notes
- Dimensional drawings and CAD models
- Documentation
To access the downloads resources you must have a downloads account.
1. Go to the Point Grey downloads page.
2. Under Register (New Users), complete the form, then click Submit.
After you submit your registration, you will receive an email with instructions on how to activate your account.
2.2
Installing Your Interface Card and Software
1. Install your Interface Card
Ensure the card is installed per the manufacturer's instructions.
Alternatively, use your PC's built-in host controller, if equipped.
Open the Windows Device Manager. Ensure the card is properly installed under Universal Serial Bus Controllers. An exclamation point (!) next to the card indicates the driver has not yet been installed.
2. Install the Ladybug® Software
For existing users who already have Ladybug software installed, we recommend ensuring you have the latest version for optimal performance of your camera. If you do not need to install Ladybug software, use the DriverControlGUI to install and enable drivers for your card. The Ladybug5 requires Ladybug SDK v1.7 or later.
a. Login to the Point Grey downloads page.
b. From the Camera Family drop-down, select Ladybug5.
c. Click on the Software link to expand the results.
d. Under Ladybug SDK, click the 32- or 64-bit link to begin the download and installation.
After the download is complete, the Ladybug setup wizard begins. If the wizard does not start automatically, double-click the .exe file to open it. Follow the steps in each setup dialog.
3. Enable the Drivers for the card
During the installation, you are prompted to select your interface driver.
In the Interface Driver Selection dialog, select I will use USB cameras.
This selection ensures the Point Grey pgrxhci (UsbPro) and pgrusbcam drivers are installed. For optimal performance after setup, we recommend configuring the pgrxhci (UsbPro) driver on the host controller to operate directly with the camera.
To uninstall or reconfigure the driver at any time after setup is complete, use the DriverControlGUI.
2.3 Installing Your Camera
1. Install a Mounting Bracket (optional)
a. Install a Tripod Adapter.
The tripod adapter attaches to the bottom of the camera.
The camera is also compatible with the Ladybug3 tripod adapter (Part no. ACC-01-0013).
Note: the tripod adapter uses a 3/8" mounting hole, which requires an adapter to fit a standard tripod.
The tripod adapter is not used if using a desktop mount.
b. Install a Desktop Mount.
Thread the cables through the desktop mount and out the cable exit slot before attaching the mount to the camera.
The desktop mount is not used if using a tripod adapter.
2. Connect the Interface Cable to the Camera
Plug the USB 3.0 cable into the camera and secure it with the cable jack screws.
3. Connect the Camera to the Interface Card
Plug the USB 3.0 cable into the host controller or hub.
Always connect the USB 3.0 cable to the camera before connecting to the host controller.
4. Plug in the GPIO connector
GPIO is used for power, trigger, and strobe.
The wiring harness must be compatible with a Hirose 12-pin female GPIO connector.
5. Confirm Successful Installation
From the Start menu, select All Programs > Point Grey Research > PGR Ladybug > LadybugCapPro.exe.
a. The Welcome dialog opens, displaying a choice of starting a camera or loading a previously recorded stream file. Select Start Camera.
b. The Select Camera dialog opens. This dialog allows you to view a list of all the currently connected Ladybug cameras, and select one to control.
c. Ensure the camera is identified as USB 3.0. If the camera is identified as USB 2.0, this could indicate a bad cable connection or an incorrect driver, and the camera will not function properly.
d. To begin grabbing images, select a camera and click OK.
2.4 Configuring Camera Setup
After successful installation of your camera and interface card, you can make changes to the setup. Use the tools described below to change the driver for your interface card.
For information on updating your camera's firmware post installation, see Upgrading Camera Firmware.
2.4.1 Configuring Camera Drivers
Point Grey has created its own Extensible Host Controller Interface (xHCI) driver that is compatible with several USB 3.0 host controller chipsets. The PGRxHCI driver offers the best compatibility between the camera and host controller; Point Grey recommends using this driver with Point Grey USB 3.0 cameras.
Point Grey's PGRxHCI driver does not support USB devices from other manufacturers.
Related Knowledge Base Articles
Recommended USB 3.0 System Components: Knowledge Base Article 368
How does my USB 3.0 camera appear in Device Manager?: Knowledge Base Article 370
To manage and update drivers use the DriverControlGUI utility provided in the SDK. To open the DriverControlGUI:
Start Menu > All Programs > Point Grey Research > PGR Ladybug > Utilities > DriverControlGUI
Select the interface from the tabs in the top left. Then select your interface card to see the current setup.
For more information about using the DriverControlGUI, see the online help provided in the tool.
3 Tools to Control the Ladybug5
The Ladybug5's features can be accessed using various controls, including:
- Ladybug SDK, including API examples
- LadybugCapPro application
Examples of the controls are provided throughout this document. Additional information can be found in the appendices.
3.1 Using Ladybug SDK
The user can monitor or control features of the camera through Ladybug API examples provided in the Ladybug SDK, or through the LadybugCap Program.
3.1.1 Custom Applications Built with the Ladybug API
The Ladybug SDK includes a full Application Programming Interface that allows you to create custom applications to control Point Grey spherical products. Included with the SDK are a number of source code examples to help programmers get started.
Ladybug API examples are written in C++ and are also provided in a precompiled state. Two examples are provided in C# as well as C++. For more information, see Appendix A: Ladybug API Examples.
3.2 Using the LadybugCapPro Application
The LadybugCapPro application provides an easy-to-use interface for controlling many functions of your Ladybug camera. LadybugCapPro consists of two primary interfaces: the Main Window and the Camera Control Dialog.

Main Window functions:
- Control post-processing settings, including color processing algorithm, falloff correction, blending width, and projection.
- Record stream files from the camera and convert them into other formats.
- Save individual panoramic images.
- Record positional data from a GPS device into a stream, and generate Google Maps or Google Earth files.

Camera Control Dialog functions:
- Control settings such as brightness, gain, shutter, white balance and others.
- Configure video mode and pixel format.
- Operate the camera in trigger mode.
- Configure GPIO pins for trigger/strobe control.
- Control each sensor independently for shutter, gain and auto exposure.
- Access advanced camera settings.
To start LadybugCapPro
To run LadybugCapPro from the Start menu, select Program Files > Point Grey Research Inc. > PGR Ladybug > LadybugCapPro.exe.
3.2.1 Welcome Dialog
When LadybugCapPro starts, the Welcome dialog opens. You have a choice of starting a camera, or loading a previously recorded stream file.
Start Camera
If you choose to start a camera, the Select Camera dialog opens. This dialog allows you to view a list of all the currently connected Ladybug cameras across all buses, and select one to control and view images from. The dialog also lists basic information for each camera, such as the serial number and the current firmware version.
To begin grabbing images, select a camera and click OK. The Main Window opens in live image-grabbing mode.
To access the Camera Control dialog prior to grabbing images, select a camera and click Configure Selected. After configuring the camera, close the Camera Control dialog and click OK to begin grabbing images.
Load Stream File
If you choose to load a stream file, the Windows file explorer opens, allowing you to browse for a .pgr stream file to open. After selecting a stream file, the Main Window opens in recorded stream mode.
3.2.2 Working in the LadybugCapPro Main Window
The Main Window is where you do most of your work in LadybugCapPro. After starting a camera or loading a stream file in the Welcome dialog, the Main Window opens and displays either a live video stream from the current camera or a previously recorded video stream.
To magnify the display of toolbar icons for improved accessibility, click Settings -> Options on the menu. In the LadybugCapPro Options dialog, at bottom, click Use large icons. Then click OK.
In Stream File mode, the title bar of the main window contains the file path name, serial number, pixel format, and frame rate for the loaded stream file.
Functions in LadybugCapPro can be accessed via menus or toolbars.
3.2.2.1 Main Toolbar
Use the Main Toolbar for connecting to a new camera (or stream) or changing LadybugCapPro application settings.
Icon descriptions:
- Starts a new camera or loads a .pgr stream file.
- Allows you to set the following:
  - Options for communicating with your GPS receiver.
  - JPEG Compression Quality: controls the quality of images that are saved from a stream file in JPEG format. We recommend the default setting of 85%; the increased file size and processing resources at higher settings may not be worth the minimal increase in quality.
  - Options for Google Maps. See Generating Google Maps and Google Earth data.
  - Stabilization: adjusts parameters for working with image stabilization.
  - Dynamic Stitch: properties used for auto and one-shot dynamic stitch.
  - Use large icons: check to magnify the display of toolbar icons.
- Copyright information about LadybugCapPro.
3.2.2.2 Live Camera Toolbar
The Live Camera Toolbar displays only if the camera is in live image-grabbing mode. Use this toolbar for the following functions:
- Start or stop recording a stream.
- Pause the grabbing of images from the camera.
- Access the Camera Control dialog.
- Change the format of the images being output from the camera.
- Perform a one-shot auto-adjustment.
- Enable or disable camera features.
- Select an auto shutter range from Motion, Indoor, or Low Noise.
- Select an auto exposure area from Full, Bottom, or Top.
3.2.2.3 Stream Navigation Toolbar
The Stream Navigation toolbar displays only when a previously-recorded stream file is opened. Use this toolbar for navigating within a stream.
Toolbar Control Description
Opens a dialog for navigating to a specific frame.
A series of buttons for navigating through the frames of the video stream. Mouse over each button for an explanation. Alternatively, use the 'Jump to frame' icon or the Seek slider at bottom.
Click to play the stream file. Click again to pause.
Specifies the first frame from which to begin outputting the stream. Use the buttons at right, or the Seek slider at bottom, to navigate to the desired frame. Then click. If not specified, the stream outputs from the beginning frame.
Specifies the frame on which to stop the output. Use the buttons at right, or the Seek slider at bottom, to navigate to the desired frame. Then click. If not specified, the stream output ends at the final frame.
For more information, see Viewing and Outputting Stream Files.
3.2.2.4 Stream Processing Toolbar
The Stream Processing toolbar displays only when a previously-recorded stream file is opened. Use this toolbar for outputting the stream in a different format and resolution.
Toolbar Control Description

Mark left keyframe
Sets the left keyframe from which to begin outputting the stream. Use the buttons, or the Seek slider at bottom, to navigate to the desired frame. Then click Mark left keyframe. If not specified, the stream outputs from the beginning frame.

Mark right keyframe
Sets the right keyframe on which to stop the output. Use the buttons, or the Seek slider at bottom, to navigate to the desired frame. Then click Mark right keyframe. If not specified, the stream output ends at the final frame.

Output Type
A drop-down list of formats for outputting the stream.

Format
The video or image format of the output. Output Type (see above) determines which formats are supported. If AVI or H.264 is selected and total output is greater than 2 GB, separate files are created for each sequential 2 GB section of the output.

Output Size
A drop-down list of resolutions for outputting the stream. To specify a custom resolution, select Custom.
Click to start conversion. The Confirm Settings dialog opens for specifying an output directory for the output file.
After specifying all applicable settings, click Convert! to create the output file(s).
Click to temporarily stop converting. Click again to resume. To cancel conversion after pausing, use the stop control. Any images created before clicking are saved to the directory you specify, including those created during AVI conversion.

Click to permanently stop converting. Any images created before clicking are saved to the directory you specify.
For more information, see Viewing and Outputting Stream Files.
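The 2 GB splitting rule above is simple to reason about. As a quick illustration (a sketch only; this helper is hypothetical and not part of the Ladybug SDK), the number of output files for a given total output size is:

```python
import math

GB = 1 << 30  # AVI/H.264 output is split into sequential 2 GB sections

def output_file_count(total_bytes: int, section_bytes: int = 2 * GB) -> int:
    """Number of output files produced for a given total output size."""
    if total_bytes <= 0:
        return 0
    return math.ceil(total_bytes / section_bytes)

print(output_file_count(5 * GB))  # 5 GB of output -> 3 files
```

For example, a converted stream totaling exactly 2 GB stays in a single file, while anything above that boundary starts a new section.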
3.2.2.5 Image Processing Toolbar
The Image Processing Toolbar contains settings that are common to both the live camera and stream file modes. The controls on this toolbar are used to change the way images are processed and rendered. You can use this toolbar to change the color processing algorithm, panoramic viewing angle, panoramic mapping type, falloff correction, blending width, stabilization, sphere size, and color correction. Additionally, you can view a histogram of RGB values represented in the current image.
Control Description

Color Processing
Specifies the algorithm that LadybugCapPro uses to convert raw Bayer-tiled image data to 24-bit RGB images. Lower-quality algorithms can increase the LadybugCapPro display rate, and higher-quality algorithms can decrease the display rate.
Two additional algorithms are:
- High Quality Linear on GPU: Same output as High Quality Linear, but better performance on graphics cards with NVIDIA CUDA support.
- Directional Filter: Highest quality output, but significantly better performance than Rigorous.
Falloff Correction
Enables or disables falloff correction, which adjusts the intensity of light in images to compensate for a vignetting effect. This control is off by default. To enable, check Enable Falloff Correction. Then, specify an attenuation value either by using the slider or entering a value in the textbox. The attenuation value regulates the degree of adjustment you want to apply. Then click OK.

Blending Width
Allows you to adjust the pixel width along the sides of each of the six images within which blending takes place prior to stitching. Blending is the process of adjusting pixel values in each image that overlap with the fields of adjacent images to minimize the effect of pronounced borders. The default width of 100 pixels is suitable for the 20-meter sphere radius to which Ladybug cameras are pre-calibrated. To change the sphere radius calibration, see Sphere Size below.
Image Type
Changes the way images are rendered.
These controls affect video display only. To specify how images are rendered when outputting to video, specify an Output Type on the Stream Processing toolbar.
Rotation Angle
Specifies the orientation of the camera unit's six cameras to the projection. The default orientation is: camera 0 projects to the front of the sphere and camera 5 to the upward pole (or top) of the sphere.
Mapping Type
Specifies the mapping projection that dictates how the six individual pictures from each camera are stitched into a panoramic display: either Radial or Cylindrical.
Image Stabilization
Adjusts image display to compensate for the effect of unwanted movement across frames when the camera records on an unstable surface.
Sphere Size
Allows you to change the sphere radius, in meters, to which images are calibrated for stitching panoramas. See Adjusting Sphere Size for Stitching.
Image Adjustment
Opens a dialog for performing color correction, sharpening, texture intensity adjustment and tone mapping. See Adjusting 12- and 16-bit Images.
Anti-Aliasing
Minimizes sampling errors, especially in low-resolution images. From the Settings menu, select Enable Anti-Aliasing.
Control Description
Histogram
Displays a histogram of the values represented in the pixels of the current image.
Max Percent allows you to adjust the graphical display to view a subset of percentage representation. For example, to view only the first 5% of the representation of values in the graph, enter '5' in the Max Percent field.
All Camera specifies that the values are compiled from all six cameras on the Ladybug system. To see values from only one camera at a time, select a camera. (For camera orientation, see Rotation Angle above.)
3.2.2.6 GPS Toolbar
The GPS Toolbar is used for starting or stopping a GPS device, as well as generating Google Map and Google Earth data when a stream file is loaded.
Icon Description
Instructs LadybugCapPro to begin receiving positional data from the GPS unit. When used in conjunction with Stream Files, GPS data is saved with the stream file. This control is not available in recorded stream mode. Click again to stop GPS recording.
Creates a Google Map file from the GPS data that was previously recorded with the stream file, and allows you the option to load it. An internet connection is required to view the file. Google Maps are saved as .html files in the bin folder of the PGR Ladybug installation directory. This control is not available in live image-grabbing mode.
Creates a Google Earth file from the GPS data recorded with the stream file, and allows you the option to load it. The Google Earth application and an internet connection are required to view the file. Google Earth files are stored as .kml files in the bin folder of the PGR Ladybug installation directory. This control is not available in live image-grabbing mode.
You can also export GPS NMEA data from a loaded stream file using the GPS menu.
3.2.2.7 LadybugCapPro Main Menu
In LadybugCapPro, most tasks represented in the top menu bar can also be performed using the LadybugCapPro toolbars. Using the menu bar, you can accomplish the following additional tasks:
Saving Images
In both Live Camera and Stream File mode, you can save the current image to panoramic JPEG or panoramic bitmap format, or as six individual color-processed images, rectified or non-rectified.
Downloading the Configuration File
You can download the file that calibrates the sphere radius for stitching panoramic images. To download this file to your 'My Documents' folder, select File > Save Configuration File. By default, images are stitched using a sphere radius of 20 meters. To change the sphere radius, see Adjusting Sphere Size for Stitching. For more information about stitching calibration, see Knowledge Base Article 250.
Downloading the Alpha Mask File
You can download the alpha mask files that dictate pixel opaqueness during the blending stage of the stitching process. To download, select File > Save Alphamask File. For more information about alpha mask files, see Knowledge Base Article 250.
Getting Help
You can get the following information from the Help menu:
- The SDK Help file.
- LadybugCapPro copyright and version.
- Information about the video card on the system that is being used with LadybugCapPro to render images.
3.2.2.8 LadybugCapPro Status Bar
The status bar at the bottom of the window displays different information, depending on which mode the application is in.
Live Camera mode
- The first (left-most) status pane displays the status of the connection between LadybugCapPro and the camera unit. A red light here indicates a loss of image. Click on the red light to display event statistics with details.
- The second status pane contains GPS positional information.
- The third status pane shows the display rate, which is the rate at which images are being drawn to screen.
- The fourth status pane shows the actual rate at which the images are being grabbed from the camera.
- The final (right-most) status pane shows the rate at which image data is being transferred from the camera to the PC over the bus.
Stream File mode
- The first (left-most) status pane contains information about the status of stream conversion.
- The second status pane contains GPS positional information.
- The third status pane shows the rate at which image conversion is processed.
- The fourth status pane shows the index number of the current image being displayed, out of the total number of images in the stream.
- The fifth status pane shows the current values of the left and right keyframes.
- The final (right-most) status pane shows the shutter, gain and gamma settings under which the stream file was recorded.
For more information, see Viewing and Outputting Stream Files.
3.2.3 Using the Camera Control Dialog
The Camera Control dialog allows you to control most Ladybug camera functions.
To include this dialog within a custom software application, link to pgrflycapturegui.lib and create a new CameraGUIContext within your application. Refer to the LadybugCap demo source code for an example of how to do this.
The following settings can be viewed or set using this dialog:
- Camera Settings - For controlling settings such as Brightness, Exposure, Shutter, Gain and others.
- Custom Video Modes - For specifying video mode, pixel format and packet size.
- Camera Information - Provides information about the camera hardware and firmware.
- Camera Registers - Provides direct access to camera registers.
- Trigger/Strobe - For configuring the general purpose input/output (GPIO) capabilities of the camera.
- Advanced Camera Settings - For controlling memory channels, embedded image information and autoexposure range.
- High Dynamic Range (HDR) - Enables high dynamic range exposure.
- Data Flash - Provides access to the camera's flash memory.
- System Information - Provides information about the host system to which the camera is connected.
- Bus Topology - Displays the network topology.
- Help / Support - Information about downloading software and firmware updates, accessing the knowledge base, and opening a support ticket.
- Ladybug Settings - For controlling JPEG compression and independent sensor control of exposure settings and auto-exposure statistics.
Some camera controls and formats may be greyed out. If a camera control is greyed out, this means that the camera does not support the function.
3.2.3.1 Camera Settings
The Camera Settings dialog allows the user to control settings such as Brightness, Exposure, Shutter, Gain and others.
For Ladybug5 users, access to parameters may be limited by the pixel format in use: 8-bit images allow more control during the image capture phase, while 12- and 16-bit images allow more control during post-processing.
To open the Camera Settings dialog:
From the Settings menu, select Camera Control and click the Camera Settings tab.
3.2.3.2 Custom Video Modes
Shows information about the current video mode and pixel format of the camera, and allows you to configure packet size.
You must click Apply for these settings to take effect.
Control Description

Mode/Pixel Format
The video mode and pixel format in which the camera is running.
Format 7 Packet Size
Allows you to control the size of the packets sent by the camera. A higher packet size allows for a higher frame rate and larger image buffer size.
Related Knowledge Base Articles
- Why is the frame rate displayed in the demo program different from the required frame rate? (Knowledge Base Article 182)
- Ladybug JPEG image quality and buffer size settings (Knowledge Base Article 288)
3.2.3.3 Camera Registers
This dialog provides direct access to camera registers, and is therefore recommended for advanced users only. The camera register space conforms to IIDC specifications (see http://www.1394ta.org/ ).
For more information about camera registers, refer to the Point Grey Digital Camera Register Reference.
3.2.3.4 Trigger/Strobe
The GPIO/Trigger dialog provides control over the general purpose input/output (GPIO) capabilities of the camera, including the ability to configure:
- Specific pins for input and output.
- External trigger mode.
- External trigger delay (or shutter delay when not in trigger mode).
- Strobe pulse polarity, duration and delay.
Special output modes such as strobe signal pattern and PWM must be configured using the camera registers.
Control Description
Enable/Disable trigger
When checked, allows the camera to respond to external triggers or internal software triggers.
Mode
Specifies the mode for how the camera responds to an external trigger. Not all modes are supported by all camera models.
Parameter
Certain trigger modes require a parameter to define the triggering cycle.
Trigger Source
Specifies which GPIO pin receives input from an external trigger device.
Control Description

Trigger Polarity
Specifies a low or high signal polarity.

Trigger Delay
When checked, you can use the slider to specify the time delay, in seconds, from when an external trigger event occurs to the start of integration (when the shutter opens). When Trigger On/Off is unchecked, this value represents the shutter delay.

Fire Software Trigger
When clicked, causes a one-time internal (software-based) trigger to fire. Enable/Disable Trigger must be checked for the camera to respond.

Pin Direction Control
Specifies whether the pin is configured for input or output. The Source pin cannot be configured as an output when Trigger On/Off is checked.

Strobe Control (GPIO 0...n)
- Enables a GPIO pin for strobe output.
- Allows configuration of polarity and the period to delay assertion of the output strobe signal after start of exposure. Delay can be specified in ticks of your camera's clock and must be within the range of 0 to 4095.
- Specifies the duration of the strobe output signal. If a value of 0 is entered, the duration is the same as the length of exposure. Duration can be specified in ticks of your camera's clock and must be within the range of 0 to 4095.
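The strobe delay and duration fields accept values in camera clock ticks, limited to 0 through 4095 (a 12-bit range). As a rough sketch of converting a desired delay into a register value, with the tick period treated as a hypothetical parameter (it is camera-specific and not documented here):

```python
STROBE_TICK_MAX = 4095  # delay/duration fields accept 0..4095 ticks

def delay_to_ticks(delay_s: float, tick_period_s: float) -> int:
    """Convert a desired strobe delay to clock ticks, clamped to the valid range.

    tick_period_s is the camera clock tick period -- an assumed parameter;
    consult your camera's documentation for the actual value.
    """
    if delay_s < 0:
        raise ValueError("delay must be non-negative")
    ticks = round(delay_s / tick_period_s)
    return min(ticks, STROBE_TICK_MAX)

# Example: a 1 ms delay with an assumed 1 microsecond tick -> 1000 ticks
print(delay_to_ticks(0.001, 1e-6))
```

Values beyond the 4095-tick ceiling cannot be represented and are clamped in this sketch; in practice you would choose settings that fit the range.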
3.2.3.5 Advanced Camera Settings
The Advanced Features dialog allows the control of advanced camera features, including:
- Auto Range Control--Allows you to specify a range for exposure, shutter and gain that is narrower than the full range for the camera, when operating in auto-exposure mode. Use the Camera Settings dialog to set autoexposure.
3.2.3.6 Ladybug Settings
The Ladybug Settings dialog allows you to adjust JPEG compression and control exposure for each sensor independently.
Compression Control
JPEG Quality - Controls the JPEG compression level of the compressor unit. Increasing the JPEG Quality value increases JPEG image quality and, as a result, the amount of image data that is produced and collects in the image buffer.
Select Auto to set the compression rate to the maximum allowed by the image buffer. Auto JPEG Quality means the compression rate continually adjusts so that it never exceeds the amount of data allowed by the image buffer. Manual JPEG Quality provides consistent compression; however, the size of compressed image data may exceed the image buffer size, resulting in buffer size errors.
A JPEG Quality value between 80% and 95% is recommended, depending on your application's requirements. The visual improvement at higher than 95% is negligible and usually not worth the increased amount of data that is generated.
See JPEG Compression and JPEG Buffer Usage.
Auto buffer usage - When JPEG Quality - Auto is selected, you can use this slider to specify the percentage of the image buffer that is used for JPEG-compressed image data. Specifying a value less than the maximum allows for room in the image buffer to accommodate extra image data, depending on scene variations from frame to frame. Increasing this value may result in an increase in the JPEG Quality setting. When JPEG Quality - Auto is not selected, the percentage of the image buffer that is used cannot be controlled.
A Buffer Usage setting between 80% and 95% is recommended.
Independent Sensor Control
This interface provides customized control of exposure for each of the six sensors independently for greater dynamic range. Independent Sensor Control is activated in any one of the following ways:
- Selecting the Shutter or Gain On/Off control (the On/Off control for each sensor controls all sensors).
- Deselecting the Shutter or Gain On/Off control on the Camera Settings pane.
- Clicking the Independent Sensor Control icon on the toolbar.
When shutter or gain is selected in the Independent Sensor Control interface, the following options are available:
- When either shutter or gain is selected, auto exposure can be controlled manually or automatically for each sensor; OR
- When gain is selected, gain can be controlled manually or automatically for each sensor. When shutter is selected, shutter can be controlled manually or automatically for each sensor.
For best results, apply texture intensity adjustment and tone mapping during image processing.
Sensors Used for Auto Exposure Statistics
When operating in auto exposure mode, you can control which camera sensors are used for calculating the settings of the auto exposure algorithm. For example, if you want all the sensors on the side of the camera to be used in this calculation, but not the top sensor, check boxes 0 through 4, and leave box 5 blank. Leaving all sensors unchecked is equivalent to checking all.
To set exposure in auto mode, use the Camera Settings dialog.
Camera 0 is etched onto the camera housing. Camera 5 is the top sensor.
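The selection rule above (leaving all sensors unchecked is equivalent to checking all) can be sketched as a small helper. This is purely illustrative; the function name and set-based representation are assumptions, not part of the Ladybug API:

```python
ALL_SENSORS = set(range(6))  # cameras 0-4 on the sides, camera 5 on top

def sensors_for_auto_exposure(checked: set) -> set:
    """Return the sensors used for auto exposure statistics.

    Mirrors the dialog's rule: an empty selection is equivalent
    to selecting all six sensors.
    """
    if not checked:
        return set(ALL_SENSORS)
    return set(checked) & ALL_SENSORS

# Side sensors only (exclude the top camera, sensor 5):
print(sorted(sensors_for_auto_exposure({0, 1, 2, 3, 4})))
# Nothing checked -> statistics use all sensors:
print(sorted(sensors_for_auto_exposure(set())))
```

The example in the text (checking boxes 0 through 4 and leaving box 5 blank) corresponds to the first call, which excludes the top sensor from the auto exposure calculation.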
4 Ladybug5 Physical Interface
4.1 Ladybug5 Dimensions
Figure 4.1: Ladybug5 Dimensional Diagram
To obtain 3D models, contact [email protected].
4.2 Mounting
4.2.1 Using the Case
The case is equipped with five M4 X 0.7 mounting holes on the bottom of the case that can be used to attach the camera directly to the desktop mount, tripod adapter, or a custom mount.
4.2.2 Using the Desktop Mount
A desktop mount is provided with the camera.
Related Knowledge Base Articles
- Using the Ladybug in a mobile setting (Knowledge Base Article 302)
Figure 4.2: Desktop Mount (in mm)
4.2.3 Using the Tripod Adapter
A tripod adapter is provided with the camera. The tripod adapter has a 3/8" mounting hole which requires an adapter to fit a standard tripod.
Figure 4.3: Tripod Adapter (in mm)
4.3 Water and Dust Protection
To protect against dust and water, the Ladybug5 camera housing includes a sealed layer of glass, with anti-reflective coating on both sides, over each of the six lenses.
Because the camera bottom contains outside interfaces, the camera should be operated in rainy weather only when connected to the desktop mount or the tripod adapter. The Ladybug5 should not be submerged under water under any circumstances.
The Ladybug5 contains space to house a desiccant plug, located on the bottom of the camera, to reduce the risk of humidity damaging the camera. The desiccant plug should be replaced periodically.
4.4 Infrared Cut-Off Filters
Point Grey color camera models are equipped with an additional infrared (IR) cut-off filter. This filter can reduce sensitivity in the near infrared spectrum and help prevent smearing. The properties of this filter are illustrated in the results below.
Figure 4.4: IR filter transmittance graph
The following are the properties of the IR filter/protective glass:

Type                   Reflective
Material               Schott D 263 T
Physical Filter Size   15.5 mm x 18 mm
Glass Thickness        1.0 mm ±0.07 mm
Dimensional Tolerance  ±0.08 mm
4.5 Camera Interface and Connectors
4.5.1 USB 3.0 Connector
The camera is equipped with a USB 3.0 Micro-B connector that is used for data transmission, camera control and power. For more detailed information, consult the USB 3.0 specification available from http://www.usb.org/developers/docs/ .
Figure 4.5: USB 3.0 Micro B Connector
Table 4.1: USB 3.0 Micro-B Connector Pin Assignments

Pin  Signal Name  Description
1    VBUS         Power
2    D-           USB 2.0 differential pair
3    D+           USB 2.0 differential pair
4    ID           OTG identification
5    GND          Ground for power return
6    MicB_SSTX-   SuperSpeed transmitter differential pair
7    MicB_SSTX+   SuperSpeed transmitter differential pair
8    GND_DRAIN    Ground for SuperSpeed signal return
9    MicB_SSRX-   SuperSpeed receiver differential pair
10   MicB_SSRX+   SuperSpeed receiver differential pair
The USB 3.0 Micro-B receptacle accepts a USB 2.0 Micro-B plug and, therefore, the camera is backward compatible with the USB 2.0 interface.
When the camera is connected to a USB 2.0 interface, it runs at USB 2.0 speed, and maximum frame rates are adjusted accordingly based on current imaging parameters.
Related Knowledge Base Articles
- USB 3.0 Frequently Asked Questions (Knowledge Base Article 357)
4.5.2 Interface Cables
The USB 3.0 standard does not specify a maximum cable length.
The camera comes with a 5-meter USB 3.0 cable from Point Grey.
4.5.3 Interface Card
The camera must connect to an interface card. This is sometimes called a host adapter, a bus controller, or a network interface card (NIC).
In order to achieve the maximum benefits of USB 3.0, the camera must connect to a USB 3.0 PCIe 2.0 card.
The camera comes with a USB 3.0 PCIe 2.0 card from Point Grey.
4.5.4 General Purpose Input/Output (GPIO)
The camera has a 12-pin GPIO connector on the bottom of the case; refer to the diagram below for wire color-coding. The GPIO is a Hirose waterproof 12-pin female connector (Mfg P/N: LF10WBP-12SD).
The camera comes with a 6-meter power cable and wiring harness with a Hirose 12-pin male connector (Mfg P/N: LF10WBP-12P).
Diagram Pin  Function   Description
1            OPTO_GND   Ground for opto-isolated IO pins
2            I0         Opto-isolated input (default Trigger in)
3            O1         Opto-isolated output
4            IO2        Input/Output
5            +3.3 V     Power external circuitry up to 150 mA
6            GND        Ground for bi-directional IO, VEXT, +3.3 V pins
7            VEXT       Allows the camera to be powered externally
8            VEXT       Allows the camera to be powered externally
9            VEXT       Allows the camera to be powered externally
10           OPTO_GND   Ground for opto-isolated IO pins
11           IO3        Input/Output
12           GND        Ground for bi-directional IO, VEXT, +3.3 V pins
For more information on camera power, see Powering the Camera.
For more information on configuring input/output with GPIO, see Input/Output Control.
5 General Ladybug5 Operation
5.1 Powering the Camera
The power consumption specification is: 12-24 V, 13 W via GPIO.
Power must be provided through the GPIO interface. The required input voltage is 12-24 V DC. For more information, see General Purpose Input/Output (GPIO).
The camera does not transmit images for the first 100 ms after power-up. The auto-exposure and auto-white balance algorithms do not run while the camera is powered down. It may therefore take several images before the camera settles on a satisfactory image; the exact number is undefined.
When the camera is power cycled (power disengaged then re-engaged), the camera reverts to its default factory settings, or if applicable, the last saved memory channel. For more information, see User Sets (Memory Channels).
5.2 User Sets (Memory Channels)
The camera can save and restore settings and imaging parameters via on-board user configuration sets, also known as memory channels. This is useful for saving default power-up settings, such as gain, shutter, video format and frame rate, and others that are different from the factory defaults.
User Set 0 (or Memory channel 0) stores the factory default settings that can always be restored. Two additional user sets are provided for custom default settings. The camera initializes itself at power-up, or when explicitly reinitialized, using the contents of the last saved user set. Attempting to save user settings to the (read-only) factory default user set causes the camera to switch back to using the factory defaults during initialization.
The following camera settings are saved in user sets:
- Acquisition Frame Rate and Current Frame Rate
- Image Data Format, Position, and Size
- Current Video Format
- Camera power
- Frame information
- Trigger Mode and Trigger Delay
- Imaging Parameters such as: Brightness, Auto Exposure, Shutter, Gain, White Balance, and Gamma
- Input/output controls such as: GPIO pin modes, GPIO strobe modes
- Color Coding ID/Pixel Coding
To access user sets:
- During Capture--From the Settings menu, select Camera Control and click the Advanced Camera Settings tab.
Saving to or restoring from a memory channel should not be done while the camera is streaming.
5.3 Environmental Sensors
The camera provides sensors to report on current internal conditions of the camera. These environmental sensors are:
- Temperature, in degrees Celsius, ±2° C
- Humidity, in percent of relative humidity, ±3% RH
- Air Pressure, in kilopascals, ±6 kPa
- Accelerometer, in g-force, ±50 mg
- Gyroscope, not currently supported
- Compass, in Tesla and degrees, ±3 uT or ±10 degrees when the camera is stationary
The environmental sensors provide general information only. If precise measurements are required for your application, external devices should be used.
Using LadybugCapPro:
From the Settings menu, select Environmental Sensors.
Using the Ladybug API:
Use the environmental sensors sample program included with the SDK.
5.4 Stream Files
Ladybug images are written to a set of Ladybug stream files. The size of each stream file is limited to 2 Gigabytes. The stream files are named as [Stream Base Name]-[Stream Serial Number].pgr. The [Stream Base Name] is defined by the user or the application. The [Stream Serial Number] is generated internally by the Ladybug library.
For example, if [Stream Base Name] is given as 'myStream', Ladybug stream writing API functions will name the stream files as follows:
- myStream-000000.pgr
- myStream-000001.pgr
- myStream-000002.pgr
- etc.
The [Stream Serial Number] always begins with 000000. All stream files that have the same [Stream Base Name] are considered as subsets of the same Ladybug stream.
When opening a Ladybug stream with a [Stream Base Name], the Ladybug API opens all the stream files that have the same [Stream Base Name] beginning with 000000.
The total number of images in a Ladybug stream is the sum of the number of images in each stream file that shares the same [Stream Base Name].
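The naming scheme above can be sketched in a few lines. This is a minimal illustration of the [Stream Base Name]-[Stream Serial Number].pgr convention; the helper function is hypothetical and not part of the Ladybug API:

```python
def stream_file_name(base_name: str, index: int) -> str:
    """Build a Ladybug stream file name: [Base Name]-[6-digit serial].pgr."""
    return f"{base_name}-{index:06d}.pgr"

# The serial number always begins at 000000:
names = [stream_file_name("myStream", i) for i in range(3)]
print(names)  # ['myStream-000000.pgr', 'myStream-000001.pgr', 'myStream-000002.pgr']
```

Because each file is capped at 2 GB, a long recording simply continues into the next serial number in the sequence.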
The data in a stream file is written in the following sequence:

Name                     Description
Signature                Ladybug Stream file signature
Stream Header Structure  Information about the stream file
Calibration Data         Camera calibration file
Image 0                  First Ladybug image
Image 1                  Second Ladybug image
...                      ...
Image N-1                Last Ladybug image
GPS Summary Data         GPS summary data for the images in this stream file
5.5 Camera Firmware
Firmware is programming that is inserted into the programmable read-only memory (programmable ROM) of most Point Grey cameras. Firmware is created and tested like software. When ready, it can be distributed like other software and installed in the programmable read-only memory by the user.
The latest firmware versions often include significant bug fixes and feature enhancements. To determine the changes made in a specific firmware version, consult the Release Notes.
Firmware is identified by a version number, a build date, and a description.
Related Knowledge Base Articles
- PGR software and firmware version numbering scheme/standards (Knowledge Base Article 96)
- Determining the firmware version used by a PGR camera (Knowledge Base Article 94)
- Should I upgrade my camera firmware or software? (Knowledge Base Article 225)
5.5.1 Determining Firmware Version
To determine the firmware version number of your camera:
- In LadybugCapPro, open the Camera Control dialog and click on the Camera Information tab.
5.5.2
Upgrading Camera Firmware
Camera firmware can be upgraded or downgraded to later or earlier versions using the UpdatorGUI program that is bundled with the Ladybug SDK, available from the Point Grey downloads site.
Before upgrading firmware:
- Install the SDK, downloadable from the Point Grey downloads site.
- Download the firmware file from the Point Grey downloads site.
To open the UpdatorGUI:
Start Menu-->All Programs-->Point Grey Research-->PGR Ladybug-->Utilities-->UpdatorGUI
Select the camera from the list at the top. Click Open to select the firmware file. Then click Update.
Do not disconnect the camera during the update process.
6 Input/Output Control
6.1
General Purpose Input/Output (GPIO)
The camera has a 12-pin GPIO connector on the bottom of the case; refer to the diagram below for wire color-coding. The GPIO is a Hirose waterproof 12-pin female connector (Mfg P/N: LF10WBP-12SD).
The camera comes with a 6-meter power cable and wiring harness with a Hirose 12-pin male connector (Mfg P/N: LF10WBP-12P).
Table 6.1: GPIO pin assignments (as shown looking at rear of camera)

Pin  Function   Description
1    OPTO_GND   Ground for opto-isolated IO pins
2    I0         Opto-isolated input (default Trigger in)
3    O1         Opto-isolated output
4    IO2        Input/Output
5    +3.3 V     Power external circuitry up to 150 mA
6    GND        Ground for bi-directional IO, VEXT, and +3.3 V pins
7    VEXT       Allows the camera to be powered externally
8    VEXT       Allows the camera to be powered externally
9    VEXT       Allows the camera to be powered externally
10   OPTO_GND   Ground for opto-isolated IO pins
11   IO3        Input/Output
12   GND        Ground for bi-directional IO, VEXT, and +3.3 V pins
Power must be provided through the GPIO interface. The required input voltage is 12 - 24 V DC.
For more information on camera power, see
6.2
GPIO Modes
6.2.1
GPIO Mode 0: Input
When a GPIO pin is put into GPIO Mode 0 it is configured to accept external trigger signals.
6.2.2
GPIO Mode 1: Output
When a GPIO pin is put into GPIO Mode 1 it is configured to send output signals.
Do not connect power to a pin configured as an output (effectively connecting two outputs to each other). Doing so can cause damage to camera electronics.
6.2.3
GPIO Mode 2: Asynchronous (External) Trigger
When a GPIO pin is put into GPIO Mode 2, and an external trigger mode is enabled (which disables isochronous data transmission), the camera can be asynchronously triggered to grab an image by sending a voltage transition to the pin.
See Asynchronous Triggering on page 40.
6.2.4
GPIO Mode 3: Strobe
A GPIO pin in GPIO Mode 3 outputs a voltage pulse of fixed delay, either relative to the start of integration (default) or relative to the time of an asynchronous trigger. A GPIO pin in this mode can be configured to output a variable strobe pattern. See Programmable Strobe Output on the next page.
6.3
Programmable Strobe Output
The camera is capable of outputting a strobe pulse on selected GPIO pins that are configured as outputs. The start of the strobe can be offset from either the start of exposure (free-running mode) or the time of the incoming trigger (external trigger mode). By default, a pin that is configured as a strobe output outputs a pulse each time the camera begins integration of an image.
The duration of the strobe can also be controlled. Setting a strobe duration value of zero produces a strobe pulse with duration equal to the exposure (shutter) time.
Multiple GPIO pins, configured as outputs, can strobe simultaneously.
Connecting two strobe pins directly together is not supported. Instead, place a diode on each strobe pin.
The camera can also be configured to output a variable strobe pulse pattern. The strobe pattern functionality allows users to define the frames for which the camera will output a strobe. For example, this is useful in situations where a strobe should only fire:
- Every Nth frame (e.g. odd frames from one camera and even frames from another); or
- N frames in a row out of T (e.g. the last 3 frames in a set of 6); or
- Specific frames within a defined period (e.g. frames 1, 5 and 7 in a set of 8)
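These pattern examples can be modeled as a repeating period with a set of active positions; strobe_fires below is a hypothetical illustration of the logic, not a Ladybug API call:

```python
def strobe_fires(frame, period, active_positions):
    """Return True if the strobe fires on this frame.

    `period` is the length T of the repeating pattern and
    `active_positions` holds the 0-based positions within the
    pattern on which the strobe fires.
    """
    return (frame % period) in active_positions

# Every 2nd (odd) frame: period 2, position {1}
# Last 3 frames in a set of 6: period 6, positions {3, 4, 5}
# Frames 1, 5 and 7 in a set of 8 (1-based): positions {0, 4, 6}
```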
Related Knowledge Base Articles
Title                                                                                      Article
Buffering a GPIO pin strobe output signal using an optocoupler to drive external devices   Knowledge Base Article 200
GPIO strobe signal continues after isochronous image transfer stops                        Knowledge Base Article 212
Setting a GPIO pin to output a strobe signal pulse pattern                                 Knowledge Base Article 207
6.4
Debouncer
By default, Point Grey cameras will reject a trigger signal that has a pulse width of less than 16 ticks of the pixel clock.
With the debouncer the user can define a debounce value. Once the debouncer is enabled and defined, the camera will reject a trigger signal with a pulse width less than the defined debounce value.
It is recommended to set the debounce value slightly higher than the longest expected duration of an invalid signal to compensate for the quality of the input clock signal.
The debouncer is available on GPIO input pins. For the debouncer to take effect, the associated GPIO pin must be in Input mode (GPIO Mode 0). The debouncer works in all trigger modes except Trigger Mode 3 (Skip Frames).
Each GPIO has its own input delay time. The debouncer time adds additional delay to the signal on the pin.
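The debouncer's behavior can be sketched as a filter on pulse widths, measured in pixel-clock ticks; debounce below is an illustrative helper, not a Ladybug API function:

```python
def debounce(pulse_widths, debounce_ticks=16):
    """Reject trigger pulses narrower than the debounce value.

    By default the camera rejects pulses shorter than 16 ticks of
    the pixel clock; a user-defined debounce value raises that
    threshold.
    """
    return [w for w in pulse_widths if w >= debounce_ticks]
```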
Figure 6.1: Debouncer Filtering Invalid Signals
6.5
12-Pin GPIO Electrical Characteristics
Opto-isolated input pins require an external pull-up resistor to allow triggering of the camera by shorting the pin to the corresponding opto ground (OPTO_GND). Non-opto-isolated input pins are internally pulled high using weak pull-up resistors to allow triggering by shorting the pin to GND. Inputs can also be directly driven from a 3.3 V or 5 V logic output.
The inputs are protected from over-voltage. Non-isolated inputs are protected from over-voltage, over-current, and reverse polarity.
When configured as outputs, each line can sink 25 mA of current. To drive external devices that require more, consult Knowledge Base Article 200 for information on buffering an output signal using an optocoupler.
The VEXT pins (Pins 7, 8 and 9) allow the camera to be powered externally. The voltage limit is 12-24 V, and current is limited to 1.5 A.
The +3.3V pin (Pin 5) is limited to 150 mA by a fuse. External devices connected to Pin 5 should not attempt to draw higher current.
To avoid damage, connect the OPTO_GND pin first before applying voltage to the GPIO line.
7 Image Acquisition
7.1
Capturing Stream Files
Stream files are saved in the bin folder of the PGR Ladybug installation path.
When capturing stream files, there must be at least 2 GB of free space on the hard drive before writing to disk.
Using LadybugCapPro:
You can record stream files when you start a camera in Live Camera mode, using the controls in the Live Camera Toolbar.
1. From the Settings menu select Camera Control, or click the button.
   - Use the Camera Control dialog to select your pixel format, trigger mode, and other imaging parameters such as brightness, shutter, gain, and auto exposure range.
   Your selection of pixel format affects both resolution and frame rate. On the Ladybug5, for 12- and 16-bit images some parameters are deferred to post processing.
2. Use the Live Camera Toolbar controls to start, stop, and pause recording.

Using the Ladybug API:

Example writing to disk using the Ladybug API:
1. Create a stream context (LadybugStreamContext) by calling ladybugCreateStreamContext().
2. Initialize the stream context for writing by calling ladybugInitializeStreamForWriting().
3. To write an image to disk, simply grab an image, and pass it to ladybugWriteImageToStream().
4. When all the writing is complete, call ladybugStopStream() to stop writing to disk.
5. Destroy the context by calling ladybugDestroyStreamContext() when suitable (such as program termination).
When used in conjunction with a GPS receiver, you can record images to stream files when the GPS location changes by a specified distance. This feature is available using the Ladybug API; for more information, see the example included with the Ladybug SDK.
7.2
Asynchronous Triggering
The camera supports asynchronous triggering, which allows the start of exposure (shutter) to be initiated by an external electrical source (or hardware trigger) or from an internal software mechanism (software trigger).
Ladybug5 supported trigger modes:
- Standard External Trigger (Mode 0)
- Bulb Shutter Trigger (Mode 1)
- Skip Frames Trigger (Mode 3)
- Overlapped Exposure Readout Trigger (Mode 14)
- Multi-Shot Trigger (Mode 15)
7.2.1
Standard External Trigger (Mode 0)
Trigger Mode 0 is best described as the standard external trigger mode. When the camera is put into Trigger Mode 0, it starts integration of the incoming light on the falling/rising edge of the external trigger input. The Shutter value describes the integration time. No parameter is required. The camera can be triggered in this mode by using the GPIO pins as an external trigger or by using a software trigger.
It is not possible to trigger the camera at full frame rate using Trigger Mode 0; however, this is possible using Overlapped Exposure Readout Trigger (Mode 14).
Figure 7.1: Trigger Mode 0 (“Standard External Trigger Mode”)
7.2.2
Bulb Shutter Trigger (Mode 1)
In Trigger Mode 1, also known as Bulb Shutter mode, the camera starts integration of the incoming light on the external trigger input. The integration time is equal to the low state time of the external trigger input.
Figure 7.2: Trigger Mode 1 (“Bulb Shutter Mode”)
7.2.3
Skip Frames Trigger (Mode 3)
Trigger Mode 3 allows the user to put the camera into a mode where the camera only transmits one out of N specified images. This is an internal trigger mode that requires no external interaction. Where N is the parameter set in the trigger mode, the camera issues a trigger internally at a cycle time that is N times the current frame period. As with Trigger Mode 0, the Shutter value describes the integration time.
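Because only one out of every N images is transmitted, the effective transmitted rate is simply the base frame rate divided by N; a hypothetical one-liner, not part of the Ladybug API:

```python
def skip_frames_rate(base_fps, n):
    """Skip Frames mode transmits one out of every N images, so the
    effective transmitted rate is the base frame rate divided by N."""
    return base_fps / float(n)
```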
Figure 7.3: Trigger Mode 3 (“Skip Frames Mode”)
7.2.4
Overlapped Exposure Readout Trigger (Mode 14)
Trigger Mode 14 is a vendor-unique trigger mode that is very similar to Trigger Mode 0, but allows for triggering at faster frame rates. This mode works well for users who want to drive exposure start with an external event. However, users who need a precise exposure start should use Trigger Mode 0.
In the figure below, the trigger may be overlapped with the readout of the image, similar to continuous shot (free-running) mode. If the trigger arrives after readout is complete, exposure starts as quickly as the imaging area can be cleared. If the trigger arrives before the end of shutter integration (that is, before the trigger is armed), it is dropped. If the trigger arrives while the image is still being read out of the sensor, the start of exposure is delayed until the next opportunity to clear the imaging area without injecting noise into the output image. The end of exposure cannot occur before the end of the previous image readout. Exposure start may therefore be delayed, which means priority is given to maintaining the proper exposure time rather than to the trigger start.
Figure 7.4: Trigger Mode 14 (“Overlapped Exposure/Readout Mode”)
7.2.5
Multi-Shot Trigger (Mode 15)
Trigger Mode 15 is a vendor-unique trigger mode that allows the user to fire a single hardware or software trigger and have the camera acquire and stream a predetermined number of images at the current frame rate.
The number of images to be acquired is determined by the parameter specified with the trigger mode. This allows up to 255 images to be acquired from a single trigger. Setting the parameter to 0 results in an infinite number of images being acquired, essentially allowing users to trigger the camera into a free-running mode.
Once the trigger is fired, the camera will acquire N images with an exposure time equal to the value defined by the shutter, and stream the images to the host system at the current frame rate. Once this is complete, the camera can be triggered again to repeat the sequence.
Any changes to the trigger control cause the current sequence to stop.
During the capture of N images, the camera is still in an asynchronous trigger mode (essentially Trigger Mode 14), rather than continuous (free-running) mode. The result is that the frame rate is turned OFF, and the camera is put into extended shutter mode. Users should therefore ensure that the maximum shutter time is limited to 1/frame_rate to get the N images captured at the current frame rate.
Figure 7.5: Trigger Mode 15 (“Multi-Shot Trigger Mode”)
7.3
External Trigger Timing
The time from the external trigger firing to the start of shutter is shown below:
1. Trigger Pulse
2. Propagation Delay
3. Exposure Time
4. Sensor Readout
5. Data Transfer
Figure 7.6: External trigger timing characteristics
It is possible for users to measure this themselves by configuring one of the camera's GPIO pins to output a strobe pulse (see Programmable Strobe Output on page 37) and connecting an oscilloscope to the input trigger pin and the output strobe pin. The camera will strobe each time an image acquisition is triggered; the start of the strobe pulse represents the start of exposure.
7.4
Camera Behavior Between Triggers
When operating in external trigger mode, the camera clears charges from the sensor at the horizontal pixel clock rate determined by the current frame rate. For example, if the camera is set to 10 FPS, charges are cleared off the sensor at a horizontal pixel clock rate of 15 kHz. This action takes place following shutter integration, until the next trigger is received. At that point, the horizontal clearing operation is aborted, and a final clearing of the entire sensor is performed prior to shutter integration and transmission.
7.5
Changing Video Modes While Triggering
You can change the video format and mode of the camera while operating in trigger mode. Whether the new mode that is requested takes effect in the next triggered image depends on the timing of the request and the trigger mode in effect. The diagram below illustrates the relationship between triggering and changing video modes.
Figure 7.7: Relationship Between External Triggering and Video Mode Change Request
When operating in Trigger Mode 0 or Trigger Mode 1 (page 41), video mode change requests made before point A on the diagram are honored in the next triggered image. The camera will attempt to honor a request made after point A in the next triggered image, but this attempt may or may not succeed, in which case the request is honored one triggered image later. In Trigger Mode 14, point B occurs before point A. The result is that, in most cases, there is a delay of one triggered image for a video mode request, made before the configuration period, to take effect. In Trigger Mode 15, change requests made after point A for any given image readout are honored only after a delay of one image.
7.6
Asynchronous Software Triggering
Shutter integration can be initiated by a software trigger.
The time from a software trigger initiation to the start of shutter is shown below:
1. Software Trigger
2. Trigger Latency
3. Exposure Time
4. Sensor Readout
5. Data Transfer
Figure 7.8: Software trigger timing
The time from when the software trigger is written to the camera to when the start of integration occurs can only be approximated. The trigger latency (the time from the trigger pulse to the start of integration) is then added to this.
This timing is solely from the camera perspective. It is virtually impossible to predict timing from the user perspective due to latencies in the processing of commands on the host PC.
7.7
Asynchronous Trigger Settings
Using LadybugCapPro:
You can control the trigger in the Camera Settings:
1. From the Settings menu, select Camera Control, or click the button.
2. Click the Trigger / Strobe tab.
3. Under Trigger Control:
   - Select Enable/Disable trigger.
   - Select a trigger mode from the drop-down list.
   - Enter a parameter.
4. Under Trigger Delay:
   - Select Enable/Disable delay.
   - Use the sliding scale or enter a value for the trigger delay. The trigger delay controls the delay between the trigger event and the start of integration (shutter open).
5. To use a software trigger, click the Fire Software Trigger button.
7.8
Working with GPS Data
You can use a GPS receiver in conjunction with a Ladybug camera to record GPS data with stream files, generate Google Map or Google Earth files, and download a GPS data file.
You can record images to stream files when the GPS location changes by a specified distance. This feature is available using the Ladybug API; for more information, see the example included with the Ladybug SDK.
When using a GPS receiver with your Ladybug, keep in mind the following:
- Your GPS receiver should have a serial or USB interface for connecting with your laptop and be able to stream NMEA 0183 data in real time.
- To provide reliable data, your GPS device should show a connection with at least 3 satellites.
- It may take some time between when you first connect the GPS device to your PC and when it is recognized and configured for use with LadybugCapPro.
- The following GPS NMEA data structures are supported: GPGGA, GPGSA, GPGSV, GPRMC, GPZDA, GPVTG and GPGLL.
For information about how GPS data is incorporated into stream files, see the stream file data sequence described earlier in this document.
Configuring the GPS receiver

Before capturing GPS data, use the LadybugCapPro Options button ( ) on the GPS toolbar to specify some basic settings for communicating with your GPS receiver.

Control: Port Number
Description: The port to which the GPS receiver is connected. To determine the port, expand the Ports node in the Windows Device Manager. LadybugCapPro does not automatically detect this setting upon startup.

Control: Baud Rate
Description: The signaling event rate at which the GPS receiver communicates with the PC. This rate is limited by what the GPS unit supports. The NMEA 0183 standard supports the default value of 4800.

Control: Data Update Interval
Description: The time interval at which positional data is updated from the GPS to the PC. This rate can be set up to the maximum supported by the GPS unit. The default value is 1000 ms.

Control: Start GPS when starting LadybugCapPro
Description: When checked, specifies that the GPS unit should transmit positional data as soon as the LadybugCapPro application starts in live camera mode, using the existing settings.

Control: Google Map Height / Google Map Width
Description: Specifies the dimensions of the Google Maps that are generated. These dimensions affect the amount of area covered in the maps, rather than their resolution.
Using the GPS Toolbar
Once you have configured your GPS receiver, you are ready to use the GPS toolbar to record GPS data and generate Google Map or Google Earth files.
The GPS toolbar provides the following controls:
- Instructs LadybugCapPro to begin receiving positional data from the GPS unit. When used in conjunction with stream files, GPS data is saved with the stream file. This control is not available in recorded stream mode. Click again to stop GPS recording.
- Creates a Google Map file from the GPS data that was previously recorded with the stream file, and allows you the option to load it. An internet connection is required to view the file. Google Maps are saved as .html files in the bin folder of the PGR Ladybug installation directory. This control is not available in live image-grabbing mode.
- Creates a Google Earth file from the GPS data recorded with the stream file, and allows you the option to load it. The Google Earth application and an internet connection are required to view the file. Google Earth files are stored as .kml files in the bin folder of the PGR Ladybug installation directory. This control is not available in image capture mode.
Generating a GPS data file
You can download the data file containing the GPS data for each frame of a recorded stream file. From the GPS menu item, select Generate GPS/frame information. After the file is generated, a dialog box informs you of the location of the file.
7.8.1
Using GPS with the Ladybug API
For a code example, please see the examples, which can be accessed from:
Start Menu -> Point Grey Research -> PGR Ladybug -> Examples.
The Ladybug library has the ability to interface with a GPS device and insert NMEA sentence data into Ladybug images. The data can then be extracted at a later time and used to generate HTML data, which can be displayed as a Google Map, or KML data, which can be loaded into Google Earth.
The NMEA sentences supported by the Ladybug library are:
- GPGGA
- GPGSA
- GPGSV
- GPRMC
- GPZDA
- GPVTG
- GPGLL
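As an illustration of the kind of data these sentences carry, below is a minimal parse of a GPGGA sentence. parse_gpgga is illustrative only (it is not a Ladybug API function) and skips the NMEA checksum validation that a real parser should perform:

```python
def parse_gpgga(sentence):
    """Extract latitude, longitude (decimal degrees), and satellite
    count from a GPGGA sentence.

    NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm;
    the minutes are converted to fractional degrees here. Checksum
    validation (the *hh suffix) is omitted for brevity.
    """
    fields = sentence.split(",")
    if fields[0] != "$GPGGA":
        raise ValueError("not a GPGGA sentence")
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    satellites = int(fields[7])
    return lat, lon, satellites
```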
Detecting the GPS COM Port
Using the GPS functionality requires the use of a GPS device, and the COM port to which the GPS device is connected must be known. To determine the port, perform the following steps:
- Right-click on "My Computer".
- Click on the Hardware tab and click the "Device Manager" button.
- Expand the "Ports (COM & LPT)" node and note the COM port that the GPS device is mapped to.
Using the Ladybug API for GPS
The following steps provide a brief overview of how to use the GPS functionality of the Ladybug library:
1. Create a GPS context (LadybugGPSContext) by calling ladybugCreateGPSContext(). This may be done at the same time as the creation of the Ladybug camera context.
2. Register the GPS context with the Ladybug camera context by calling ladybugRegisterGPS(). A single GPS context can be registered with several Ladybug camera contexts.
3. Initialize the device by calling ladybugInitializeGPS().
4. Start the GPS device by calling ladybugStartGPS(). This may be called when ladybugStart() is called. It takes about 5 seconds for the GPS data to become available.
5. Once image grabbing is active, there are several options for retrieving GPS data:
   - Getting NMEA data from a GPS device or LadybugImage
     The functions ladybugGetGPSNMEAData or ladybugGetGPSNMEADataFromImage can be used to get a single NMEA sentence from a GPS device or LadybugImage. This is usually sufficient if only a small set of values is needed (for example, only latitude and longitude).
     If all the sentences are required, calling ladybugGetAllGPSNMEAData or ladybugGetAllGPSNMEADataFromImage will populate a LadybugNMEAGPSData structure with all the supported NMEA sentences (if available).
     Each NMEA structure has a boolean value called bValidData. This value is true only if the data contained in that structure is valid.
   - Getting GPS data from a LadybugImageInfo structure
     When grabbing images in JPEG mode, a filled LadybugImageInfo structure is available in each LadybugImage. When the GPS functionality is active, the following values are populated:
       - dGPSAltitude
       - dGPSLatitude
       - dGPSLongitude
     If any of these values are equal to LADYBUG_INVALID_GPS_DATA, they should be considered invalid.
6. Once image grabbing has been completed, call ladybugStopGPS() to stop data acquisition from the GPS device.
7. Unregister the GPS context by calling ladybugUnregisterGPS().
8. Destroy the context by calling ladybugDestroyGPSContext().
7.8.2
Generating Google Maps and Google Earth data
The Ladybug library allows the user to retrieve GPS data from a stream file and automatically generate Google Maps or Google Earth data, which can then be loaded in their respective applications.
Using LadybugCapPro:
From the GPS menu, select Generate Google Map HTML or click the button.
From the GPS menu, select Generate Google Earth KML or click the button.
Using Ladybug API:
If a stream context has already been initialized for reading, calling ladybugWriteGPSSummaryDataToFile with the relevant LadybugGPSFileType generates GPS data for the entire stream file.
8 Imaging Parameters
8.1
Pixel Formats, Frame Rates, and Image Sizes
The Ladybug captures images in Format 7 custom image mode. The table below outlines the pixel formats that are supported. The implementation of these formats and the frame rates that are possible are subject to change across firmware versions.
Changing the size of the image or the pixel encoding format requires an undetermined number of frame times, including the stop/start procedure, tearing down and reallocating image buffers, and write times to the camera.
Table 8.1: Ladybug5 supported image formats

                       Frame Rate (FPS)           Image Size
Pixel Format           Full         Half          Full         Half
                       2448 x 2048  2448 x 1024   2448 x 2048  2448 x 1024
Raw8                   8            16            30 MB        15 MB
JPEG8 (Compressed)     10           16            Variable     Variable
JPEG8 (Uncompressed)   8            16            30 MB        15 MB
Raw12                  5            10.5          45 MB        22.5 MB
JPEG12 (Compressed)    10           16            Variable     Variable
JPEG12 (Uncompressed)  5            10.5          45 MB        22.5 MB
Raw16*                 4            8             60 MB        30 MB

*Due to the 12-bit ADC, a 16-bit format is 12 bits padded with zeros.
To maximize post processing and frame rate benefits, JPEG12 is the recommended format.
Ladybug sensors are arranged in "portrait" orientation to increase the vertical field of view. As a result, height measurements appear as width in the Ladybug SDK, and width measurements appear as height.
The image size accounts for six separate images captured by each of the camera's six sensors prior to blending and stitching. Image size for JPEG compressed images is dependent on variables such as image composition and compression rate. For more information, see JPEG Compression and JPEG Buffer Usage.
Related Knowledge Base Articles

Title                                                         Article
Overview of multithreading optimizations in Ladybug library   Knowledge Base Article 264
Ladybug's JPEG image quality and buffer usage settings        Knowledge Base Article 288
Determining Image Size
For Ladybug5, the maximum size of a single camera image after image conversion is 2448 x 2048.
If your software allocates its own memory for image conversion and texture updating, the amount of memory to be allocated is:
Image Size in MB = (Number of cameras x W x H x BPP) / 1000000
Bytes per pixel (BPP) is related to pixel format:
- 8-bit = 1 BPP
- 12-bit = 1.5 BPP
- 16-bit = 2 BPP
For example, the memory size allocation required for a JPEG8 image after conversion is:
Image Size = (Number of cameras x W x H x BPP) / 1000000
Image Size = (6 x 2448 x 2048 x 1) / 1000000
Image Size = 30081024 / 1000000
Image Size = 30 MB
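The calculation above can be wrapped in a small helper; image_size_mb is illustrative, not part of the Ladybug API:

```python
def image_size_mb(num_cameras, width, height, bytes_per_pixel):
    """Image Size in MB = (cameras x W x H x BPP) / 1,000,000."""
    return num_cameras * width * height * bytes_per_pixel / 1000000.0

# BPP by format: 8-bit = 1, 12-bit = 1.5, 16-bit = 2
```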
Determining Bandwidth
To calculate your bandwidth requirements, use your required resolution, frame rate, and pixel format as follows:
Bandwidth in MB/s = (Number of Cameras x W x H x FPS x BPP) / 1000000
For example, a Raw8 full size image at maximum frame rate would use the following bandwidth:
Bandwidth = (Number of Cameras x W x H x FPS x BPP) / 1000000
Bandwidth = (6 x 2448 x 2048 x 8 x 1) / 1000000
Bandwidth = 240648192 / 1000000
Bandwidth = 241 MB/s
Determining Frame Rate
The theoretical frame rate (FPS) that can be achieved can be calculated as follows:
Frame Rate in FPS = (Bandwidth / (W x H x BPP)) / Number of Cameras
For example, assuming a Raw8 full size image, using 240 MB/s bandwidth, the calculation would be as follows:
Frame Rate = (Bandwidth / (W x H x BPP)) / Number of Cameras
Frame Rate = (240000000 / (2448 x 2048 x 1)) / 6
Frame Rate = 7.98 FPS
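Both formulas can be checked with a couple of helpers (illustrative sketches, not part of the Ladybug API):

```python
def bandwidth_mb_per_s(num_cameras, width, height, fps, bpp):
    """Bandwidth in MB/s = (cameras x W x H x FPS x BPP) / 1,000,000."""
    return num_cameras * width * height * fps * bpp / 1000000.0

def frame_rate_fps(bandwidth_bytes_per_s, width, height, bpp, num_cameras):
    """Theoretical FPS = (Bandwidth / (W x H x BPP)) / cameras."""
    return bandwidth_bytes_per_s / (width * height * bpp) / num_cameras
```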
8.2
Pixel Formats
Pixel formats are an encoding scheme by which color or monochrome images are produced from raw image data.
Most pixel formats are numbered 8, 12, or 16 to represent the number of bits per pixel.
The Ladybug5's analog-to-digital converter (ADC), which digitizes the images, is configured to a fixed bit output (12-bit). If the selected pixel format has fewer bits per pixel than the ADC output, the least significant bits are dropped. If the selected pixel format has more bits per pixel than the ADC output, the least significant bits are padded with zeros.
Pixel Format       Bits per Pixel
Raw 8, JPEG 8      8
Raw 12, JPEG 12    12
Raw 16             16
8.2.1
Raw
Raw is a pixel format in which image data is Bayer RAW, untouched by any on-board processing. Selecting a Raw format bypasses the FPGA/color core, which disables image processing such as gamma/LUT and color encoding.
8.2.2
JPEG
JPEG is a pixel format which supports 16.7 million colors and follows a standard for compression that discards redundant image data. The degree of compression can be adjusted, allowing a balance between image size and image quality.
8.2.3
JPEG Compression and JPEG Buffer Usage
When the camera operates in a JPEG-compressed imaging mode, the compressor unit processes image data based on a specified compression rate. Although specifying a higher JPEG quality value produces higher-quality images, more data must accumulate in the image buffer on the PC, increasing the risk of buffer overflow errors.
When JPEG compression is set to auto mode, compression quality adjusts automatically according to the following parameters:
- The maximum allowed by the size of the image buffer on the PC (controlled by the camera driver).
- Auto-buffer usage. This setting is the percentage of the image buffer that is used for image data, and is configurable when JPEG compression is in auto mode. Specifying a value less than the maximum (100%) allows room in the buffer to accommodate extra images, depending on scene variations from frame to frame. A setting between 80% and 95% is recommended. The visual improvement in compression quality that results from a setting higher than 95% is negligible compared to the increased amount of data generated.
To adjust the compression control:
1. From the Settings menu, select Camera Control, or click the button.
2. Click the Ladybug Settings tab.
3. Under Compression Control:
   - Select Auto for JPEG quality to specify auto compression. Then select an Auto Buffer Usage percentage from the sliding scale. A setting between 80 - 95% is recommended.
   Or,
• Deselect Auto JPEG Quality and select a fixed JPEG Quality percentage from the sliding scale. A setting between 80% and 95% is recommended.
Related Knowledge Base Articles
• Ladybug JPEG image quality and buffer size settings (Knowledge Base Article 288)
8.3 Shutter Type

8.3.1 Global Shutter

For cameras with a global shutter sensor, all lines in a frame start and stop exposure at the same time, so the exposure time for each line is the same. Following exposure, data readout begins. The readout time for each line is the same, but the start and end times are staggered.
Some advantages of global shutter are more uniform brightness and minimal motion blur.
8.4 Brightness

Brightness, also known as offset or black level, controls the level of black in an image. The camera supports brightness control.
To adjust brightness:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab.
• During Post Processing—From the Settings menu, select Image Processing and select Black Level to make an adjustment with the slider. See Adjusting 12- and 16-bit Images.
8.5 Shutter Time
The Ladybug5 supports Automatic, Manual, and One Push control of the image sensor shutter time.
Shutter times are scaled by the divider of the basic frame rate. For example, dividing the frame rate by two (e.g. 15 FPS to 7.5 FPS) causes the maximum shutter time to double (e.g. 66 ms to 133 ms).
The supported shutter time range is 0.02 ms to 2 seconds (extended shutter).
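As a rough illustration of the relationship above (a simplified sketch, not an SDK call; `max_shutter_ms` is a hypothetical helper that assumes the maximum shutter time simply equals the frame period):

```python
def max_shutter_ms(fps):
    """Approximate maximum shutter time (ms) as the frame period."""
    return 1000.0 / fps

# Halving the frame rate doubles the available shutter time:
# 15 FPS gives ~66.7 ms; 7.5 FPS gives ~133.3 ms.
```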
The terms "integration", "exposure", and "shutter" are interchangeable.
The time between the end of shutter for consecutive frames is always constant. However, if the shutter time is continually changing (e.g. being controlled by Auto Exposure), the time between the beginning of consecutive integrations will change. If the shutter time is constant, the time between integrations will also be constant.
The camera continually exposes and reads image data off the sensor under the following conditions:
1. The camera is powered up; and
2. The camera is in free-running, not asynchronous trigger, mode. When in trigger mode, the camera simply clears the sensor and does not read the data off it.
The camera continues to expose images even when data transfer is disabled and images are not being streamed to the computer. The camera continues exposing images in order to keep things such as the auto exposure algorithm (if enabled) running. This ensures that when a user starts requesting images, the first image received is properly exposed.
When operating in free-running mode, changes to the shutter value take effect with the next captured image, or the one after next. Changes to shutter in asynchronous trigger mode generally take effect on the next trigger.
To adjust shutter:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab.
8.5.1 Extended Shutter Times

The maximum shutter time can be extended beyond the normal range by disabling the frame rate. Once the frame rate is disabled, the maximum value of the shutter time increases.
To enable extended shutter:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab. Deselect Frame Rate On/Off to disable the frame rate.
8.5.2 Shutter Range

The camera offers three preset shutter range modes to set the maximum shutter value:
• Motion—maximum shutter is set as short as possible to prevent motion blur. Best used outdoors; otherwise images may be too dark. This is the default.
• Indoor—maximum shutter is slightly longer than in Motion mode, for use in indoor applications.
• Low Noise—maximum shutter is not restricted.
To set the shutter range:
• During Capture—From the Live Camera Toolbar, make a selection from the Shutter range drop-down.
8.6 Gain
Gain is the amount of amplification that is applied to a pixel by the A/D converter. An increase in gain can result in a brighter image but also an increase in noise.
The Ladybug5 supports Automatic and One Push gain modes. The A/D converter provides a PxGA gain stage (white balance/preamp) and VGA gain stage. The main VGA gain stage is available to the user, and is variable from 0 - 18 dB.
Increasing gain also increases image noise, which can affect image quality. To increase image intensity, try adjusting the lens aperture (iris) and shutter time first.
To adjust gain:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab.
• During Post Processing—From the Settings menu, select Image Processing. Select Exposure and then select Manual from the drop-down to adjust the Gain with the slider.
8.7 Auto Exposure
Auto exposure allows the camera to automatically control shutter and/or gain in order to achieve a specific average image intensity. Additionally, users can specify the range of allowed values used by the auto-exposure algorithm by setting the auto exposure range, the auto shutter range, and the auto gain range.
Auto Exposure allows the user to control the camera system’s automatic exposure algorithm. It has three useful states:

Off—Control of the exposure is achieved by setting both Shutter and Gain. This mode is achieved by setting Auto Exposure to Off, or by setting Shutter and Gain to Manual.

Manual Exposure Control—The camera automatically modifies Shutter and Gain to try to match the average image intensity to the Auto Exposure value. This mode is achieved by setting Auto Exposure to Manual and either or both of Shutter and Gain to Automatic.

Auto Exposure Control—The camera automatically modifies the value in order to produce an image that is visually pleasing. This mode is achieved by setting all three of Auto Exposure, Shutter, and Gain to Automatic. In this mode, the value reflects the average image intensity.
Auto Exposure can only control the exposure when Shutter and/or Gain are set to Automatic. If only one of the settings is in "auto" mode then the auto exposure controller attempts to control the image intensity using just that one setting. If both of these settings are in "auto" mode the auto exposure controller uses a shutter-before-gain heuristic to try and maximize the signal-to-noise ratio by favoring a longer shutter time over a larger gain value.
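The shutter-before-gain heuristic can be illustrated with a simple sketch (a hypothetical helper, not the camera's actual control loop): given a required exposure increase, absorb as much of it as possible with shutter time before raising gain.

```python
def allocate_exposure(required_factor, max_shutter_factor):
    """Split a required exposure multiplier between shutter and gain,
    favoring shutter (better signal-to-noise) up to its allowed maximum."""
    shutter_factor = min(required_factor, max_shutter_factor)
    gain_factor = required_factor / shutter_factor
    return shutter_factor, gain_factor
```

For example, a 16x exposure increase with shutter capped at 8x its current value would take 8x from shutter and the remaining 2x from gain.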
The auto exposure algorithm is only applied to the active region of interest, and not the entire array of active pixels.
There are four parameters that affect Auto Exposure:

Auto Exposure Range—Allows the user to specify the range of allowed exposure values to be used by the automatic exposure controller when in auto mode.

Auto Shutter Range—Allows the user to specify the range of shutter values to be used by the automatic exposure controller, generally some subset of the entire shutter range.

Auto Gain Range—Allows the user to specify the range of gain values to be used by the automatic exposure controller, generally some subset of the entire gain range.

Auto Exposure ROI—Allows the user to specify a region of interest within the full image to be used for both auto exposure and white balance. The ROI position and size are relative to the transmitted image. If the requested ROI has zero width or height, the entire image is used.
Auto exposure can be controlled on each of the six sensors independently. For more information, see Independent Sensor Control of Shutter, Gain and Auto Exposure.
To control auto exposure:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab.
• During Post Processing—From the Settings menu, select Image Processing. Select Exposure and then select Automatic from the drop-down to adjust with the slider. Select an ROI from the drop-down.
8.7.1 Auto Exposure ROI

There are three preset modes for the auto exposure algorithm:
• Bottom 50%—uses only the bottom 50% of the five side cameras and excludes the top camera from its calculations.
• Top 50%—uses only the top 50% of the five side cameras and includes the top camera in its calculations. This is the upside-down version of the first mode, used when the camera is mounted upside down (for example, on a helicopter).
• Full Image—uses the entire image of all six cameras for its calculations. This is the default.
For 8-bit pixel formats, the auto exposure modes are set for image capture. For 12- and 16-bit pixel formats, the auto exposure modes are set both for image capture and post processing on the PC.
To select an auto exposure ROI:
During Capture:
• From the Live Camera Toolbar, make a selection from the AE ROI drop-down.
During Post Processing:
1. From the Settings menu, select Image Processing.
2. Select Exposure and then select Automatic from the drop-down to adjust with the slider.
3. Select an ROI from the drop-down.
8.8 Independent Sensor Control of Shutter, Gain and Auto Exposure
The Independent Sensor Control feature provides customized control of exposure for each of the six cameras on the camera system independently. This feature allows users to acquire images with greater dynamic range of the overall scene.
Independent Sensor Control provides independent control of exposure-related features only.
This feature does not encompass other camera control settings such as gamma or white balance.
Global control applies the same setting (automatic or manual) and value to all six cameras. Independent control allows separate settings (automatic or manual) and values to each camera.
• If global shutter is off, shutter can be independently controlled.
• If global gain is off, gain can be independently controlled.
• If either global shutter or global gain is off, exposure can be independently controlled.

To control shutter, gain, and exposure:
During Capture:
• From the Settings menu, select Camera Control, or click the button.
• Global control of shutter, gain, and exposure is set on the Camera Settings tab.
• Independent control of shutter, gain, and exposure is set on the Ladybug Settings tab.
8.9 Gamma

The camera supports gamma functionality.

Sensor manufacturers strive to make the transfer characteristics of sensors inherently linear: as the number of photons hitting the imaging sensor increases, image intensity increases linearly. Gamma can be used to apply a non-linear mapping of the images produced by the camera. Gamma is applied after analog-to-digital conversion and is available in all pixel formats. Gamma values between 0.5 and 1 decrease brightness, while values between 1 and 4 increase brightness. By default, Gamma is enabled with a value of 1.25. To obtain a linear response, disable gamma.
For 8-bit data, gamma is applied as:

OUT = 255 × (IN/255)^(1/gamma)
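The 8-bit mapping can be sketched directly from the formula (a minimal illustration; `apply_gamma` is a hypothetical helper, not part of the Ladybug SDK):

```python
def apply_gamma(value, gamma=1.25):
    """8-bit gamma mapping: OUT = 255 * (IN/255) ** (1/gamma)."""
    return round(255 * (value / 255) ** (1.0 / gamma))
```

With the default gamma of 1.25, black and white endpoints are unchanged while midtones are brightened; a gamma of 1.0 leaves every value as-is.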
Related Knowledge Base Articles
• How is gamma calculated and applied? (Knowledge Base Article 391)
To adjust gamma:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab.
• During Post Processing—From the Settings menu, select Image Processing. Select Gamma to adjust with the slider. See Adjusting 12- and 16-bit Images.
8.10 High Dynamic Range (HDR) Imaging
Generally speaking, digital camera systems are not capable of accurately capturing many of the high dynamic range scenes that they are exposed to in real world settings. That is, they may not be able to capture features in both the
darkest and brightest areas of an image simultaneously; darker portions of the image are too dark or brighter portions of the image are too bright. High Dynamic Range (HDR) mode helps to overcome this problem by capturing images with varying exposure settings. HDR is best suited for stationary applications.
The camera can be set into an HDR mode in which it cycles between 4 user-defined shutter and gain settings, applying one gain and shutter value pair per frame. This allows images representing a wide range of shutter and gain settings to be collected in a short time to be combined into a final HDR image later. The camera does not create the final HDR image; this must be done by the user.
The HDR interface contains gain and shutter controls for 4 consecutive frames. When Enable high dynamic range is checked, the camera cycles between settings 1-4, one set of settings per consecutive frame.
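The cycling behavior can be sketched as follows (the bank values and helper names here are purely illustrative, not SDK calls or recommended settings):

```python
# Four user-defined (shutter ms, gain dB) banks -- illustrative values only.
hdr_banks = [(1.0, 0.0), (4.0, 0.0), (16.0, 6.0), (33.0, 12.0)]

def bank_for_frame(frame_index):
    """In HDR mode the camera cycles through the 4 banks, one per frame."""
    return hdr_banks[frame_index % len(hdr_banks)]
```

Frames 0, 4, 8, ... use bank 1; frames 1, 5, 9, ... use bank 2; and so on. The differently exposed frames can then be merged off-camera into a single HDR image.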
To enable HDR:
• During Capture—From the Settings menu, select Camera Control and click the High Dynamic Range tab.
• Ladybug SDK sample program—
Related Knowledge Base Articles
• Capturing HDR Images with Ladybug and Ladybug2 (Knowledge Base Article 116)
8.11 Embedded Image Information

This setting controls the frame-specific information that is embedded into the first several pixels of the image. The first byte of embedded image data starts at pixel 0,0 (column 0, row 0) and continues in the first row of the image data: (1,0), (2,0), and so forth. Users of color cameras that perform Bayer color processing on the computer must extract the values from the non-color-processed image in order for the data to be valid.
Embedded image values are those in effect at the end of shutter integration.
Each piece of information takes up 32 bits (4 bytes) of the image. When the camera is using an 8-bit pixel format, this is 4 pixels' worth of data.

The following frame-specific information can be provided:
• Timestamp
• Gain
• Shutter
• Brightness
• White Balance
• Frame counter
• Strobe Pattern counter
• GPIO pin state
• ROI position
The LadybugImageInfo structure in ladybug.h contains the format of the embedded information. This information is always provided at the start of the image.
If only Shutter embedding were enabled, then the first 4 bytes of the image would contain Shutter information for that image. Similarly, if only Brightness embedding were enabled, the first 4 bytes would contain Brightness information.
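The packing rule can be sketched as follows (a hypothetical parser, not the SDK's LadybugImageInfo structure; the field order is taken from the list above, and the byte order is an assumption):

```python
FIELD_ORDER = [
    "timestamp", "gain", "shutter", "brightness", "white_balance",
    "frame_counter", "strobe_pattern", "gpio_state", "roi_position",
]

def parse_embedded(data, enabled):
    """Return the 32-bit value of each enabled field.

    Assumes enabled fields are packed in FIELD_ORDER at the start of
    the image, 4 bytes each, big-endian (byte order assumed here).
    """
    values, offset = {}, 0
    for name in FIELD_ORDER:
        if name in enabled:
            values[name] = int.from_bytes(data[offset:offset + 4], "big")
            offset += 4
    return values
```

If only Shutter is enabled, it occupies bytes 0-3; if Gain and Shutter are both enabled, Gain occupies bytes 0-3 and Shutter bytes 4-7.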
To access embedded information:
• During Capture—From the Settings menu, select Camera Control and click the Advanced Camera Settings tab.
Interpreting Timestamp information
The Timestamp format is as follows (some cameras replace the bottom 4 bits of the cycle offset with a 4-bit version of the Frame Counter):
Cycle_count increments from 0 to 7999, which equals one second.
Second_count increments from 0 to 127.
All counters reset to 0 at the end of each cycle.
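The counters described above can be unpacked from the 32-bit timestamp word. The sketch below assumes the standard IEEE 1394 cycle-timer bit layout (second_count in bits 31-25, cycle_count in bits 24-12, cycle_offset in bits 11-0); the exact layout should be confirmed against the LadybugImageInfo documentation.

```python
def decode_timestamp(raw):
    """Split a 32-bit embedded timestamp into its counter fields.

    Assumed layout (standard IEEE 1394 cycle timer):
    bits 31-25 second_count (0-127), bits 24-12 cycle_count
    (0-7999; 8000 cycles = 1 second), bits 11-0 cycle_offset.
    """
    second_count = (raw >> 25) & 0x7F
    cycle_count = (raw >> 12) & 0x1FFF
    cycle_offset = raw & 0xFFF
    return second_count, cycle_count, cycle_offset
```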
Interpreting ROI information
The first two bytes are the distance from the left frame border that the region of interest (ROI) is shifted. The next two bytes are the distance from the top frame border that the ROI is shifted.
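A minimal parser for the 4-byte ROI field might look like this (`decode_roi` is a hypothetical helper; big-endian byte order is an assumption):

```python
import struct

def decode_roi(embedded):
    """Decode the embedded ROI field: two bytes of left offset
    followed by two bytes of top offset (big-endian assumed)."""
    left, top = struct.unpack(">HH", embedded[:4])
    return left, top
```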
8.12 White Balance

The Ladybug5 supports white balance adjustment, a system of color correction to account for differing lighting conditions. Adjusting white balance by modifying the relative gain of R, G, and B in an image enables white areas to look "whiter". Taking some subset of the target image and looking at the relative red-to-green and blue-to-green response, the objective is to scale the red and blue channels so that the response is 1:1:1.

The user can adjust the red and blue values. Both values specify relative gain, with a value that is half the maximum value representing a relative gain of zero.
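The 1:1:1 objective can be sketched as follows (an illustrative calculation, not the camera's algorithm; `white_balance_gains` is a hypothetical helper that treats green as the reference channel):

```python
def white_balance_gains(r_avg, g_avg, b_avg):
    """Red and blue channel gains that bring a neutral region's
    R:G:B response to 1:1:1, with green as the reference."""
    return g_avg / r_avg, g_avg / b_avg
```

For a neutral patch measuring R=2.0, G=1.0, B=0.5, the red channel is scaled by 0.5 and the blue channel by 2.0, equalizing all three channels.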
White Balance has two states:

Off—The same gain is applied to all pixels in the Bayer tiling.

On/Manual—The Red value is applied to the red pixels of the Bayer tiling, and the Blue value is applied to the blue pixels of the Bayer tiling.
The following table illustrates the default gain settings for most cameras.

        Black and White    Color
Red     32                 1023
Blue    32                 1023
To adjust white balance:
• During Capture—From the Settings menu, select Camera Control and click the Camera Settings tab.
• During Post Processing—From the Settings menu, select Image Processing. Select White Balance to make custom adjustments with the sliders, or select presets from the drop-down list.
8.13 Bayer Color Processing

A Bayer tile pattern color filter array captures the intensity of red, green, or blue light at each pixel on the sensor. The image below is an example of a Bayer tile pattern.
Figure 8.1: Example Bayer Tile Pattern
In order to produce color (e.g. RGB, YUV) and greyscale (e.g. Y8, Y16) images, color models perform on-board processing of the Bayer tile pattern output produced by the sensor.
Conversion from RGB to YUV uses the following formula:
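The manual's original formula image is not reproduced in this text. A common choice for this conversion is the ITU-R BT.601 matrix with U and V offset to the 0-255 range; the exact coefficients below are an assumption, not confirmed from the source:

```python
def rgb_to_yuv(r, g, b):
    """RGB -> YUV using ITU-R BT.601-style coefficients (assumed;
    the manual's original formula image is not reproduced here)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v
```

Neutral inputs map to U = V = 128, with luma Y carrying the brightness.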
To convert the Bayer tile pattern to greyscale, the camera adds the value for each of the RGB components in the color processed pixel to produce a single greyscale (Y) value for that pixel, as follows:
Y = R/4 + G/2 + B/4
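The greyscale formula can be expressed per pixel as follows (a direct transcription of the formula above; `to_grey` is a hypothetical helper, not an SDK function):

```python
def to_grey(r, g, b):
    """Per-pixel greyscale conversion: Y = R/4 + G/2 + B/4."""
    return r / 4 + g / 2 + b / 4
```

Note the weights sum to 1, so a pure white input (255, 255, 255) maps to Y = 255.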
To control Bayer color processing:
• During Post Processing—Click the button to select the algorithm used to convert raw Bayer-tiled image data to 24-bit RGB images. Lower-quality algorithms can increase the LadybugCapPro display rate, and higher-quality algorithms can decrease the display rate.

Two additional algorithms are:
• High Quality Linear on GPU: Same output as High Quality Linear, but better performance on graphics cards with NVIDIA CUDA support.
• Directional Filter: Highest quality output, but significantly better performance than Rigorous.
Accessing Raw Bayer Data
The actual physical arrangement of the red, green and blue "pixels" for a given camera is determined by the arrangement of the color filter array on the imaging sensor itself. The format, or order, in which this raw color data is streamed out, however, depends on the specific camera model and firmware version.
Raw image data can be accessed programmatically via the pData pointer in the LadybugImage structure (e.g. LadybugImage.pData). In Raw8 modes, the first byte represents the pixel at (row 0, column 0), the second byte the pixel at (row 0, column 1), and so on. In the case of a camera that is streaming Raw8 image data in RGGB format, accessing the image data via the pData pointer gives the following:
• pData[0] = Row 0, Column 0 = red pixel (R)
• pData[1] = Row 0, Column 1 = green pixel (G)
• pData[1616] = Row 1, Column 0 = green pixel (G)
• pData[1617] = Row 1, Column 1 = blue pixel (B)
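The indexing above can be sketched generically (hypothetical helpers for illustration; the row stride of 1616 comes from the example in the text and should be replaced with the actual image width for a given camera and mode):

```python
def raw_index(row, col, width):
    """Byte offset of pixel (row, col) in a Raw8 pData buffer."""
    return row * width + col

def bayer_color(row, col, pattern="RGGB"):
    """Color of the raw pixel at (row, col) for a 2x2 Bayer tiling."""
    return pattern[(row % 2) * 2 + (col % 2)]
```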
Related Knowledge Base Articles
• Different color processing algorithms (Knowledge Base Article 33)
• Writing color processing software and color interpolation algorithms (Knowledge Base Article 37)
• How is color processing performed on my camera's images? (Knowledge Base Article 89)
9 Post Processing Control
The options available for post processing control depend on the pixel format chosen during image capture. 12- and 16-bit formats have more post processing options than 8-bit formats.

Certain parameters can be adjusted after image capture during post processing, including:
• Stabilization
• Vertical tilt
• Stitching
• Image parameters such as black level, exposure, gamma, tone mapping, white balance

For 8-bit formats see Adjusting 8-bit Images. For 12- and 16-bit formats see Adjusting 12- and 16-bit Images.
9.1 Reading Stream Files

Using LadybugCapPro:
When LadybugCapPro is launched, it prompts you to start a camera or load a stream file. Alternatively, from the File menu, select New or click the button. Click Load Stream File. Select your file and click Open.
Using Ladybug API:
The following steps provide a brief overview of how to use the stream functionality of the Ladybug library to read a stream from disk:
1. Create a stream context (LadybugStreamContext) by calling ladybugCreateStreamContext().
2. Initialize the stream context for reading by calling ladybugInitializeStreamForReading().
3. At this point, additional information about the stream can be obtained by calling ladybugGetStreamHeader() and ladybugGetStreamNumOfImages().
4. If a specific image is required, calling ladybugGoToImage() will move the stream to the specified image.
Otherwise, ladybugReadImageFromStream() will retrieve the image from the current reading pointer.
5. When reading is complete, call ladybugStopStream() to stop reading.
6. Destroy the context by calling ladybugDestroyStreamContext() when suitable (such as program termination).
9.2 Working with Images

In both live camera and recorded video modes, you can use the Image Processing toolbar to change the way the camera processes and renders images. You can also click and drag inside the image display to render image rotation and magnification in different ways.
9.2.1 Falloff Correction

Falloff Correction adjusts the intensity of light in images to compensate for a vignetting effect. This control is disabled by default.

To enable falloff correction:
1. From the Image Processing toolbar click the button, or, from the Settings menu select Falloff Correction.
2. Select Enable falloff correction.
3. Specify an attenuation value with the slider or textbox. The attenuation value regulates the degree of adjustment to apply.
4. Click OK.
9.2.2 Blending Width

Blending is the process of adjusting pixel values in each image that overlap with the fields of adjacent images to minimize the effect of pronounced borders. The default width of 100 pixels is suitable for the 20-meter sphere radius to which Ladybug cameras are pre-calibrated. To change the sphere radius, see Adjusting Sphere Size for Stitching.

The blending width control allows you to adjust the pixel width along the sides of each of the six images within which blending takes place prior to stitching.

To modify the blending width:
1. From the Image Processing toolbar click the button, or, from the Settings menu select Blending Width.
2. Specify a blending width with the slider or textbox.
3. Click OK.
9.2.3 Rendering the Image for Display

These controls affect video display only. To specify how images are rendered when outputting to video, specify an Output Type. You can change the way images are rendered for display.
To modify the image display:
1. From the Image Processing toolbar click the button, then select an image type from the drop-down list:
• Panoramic—Renders the image as a panoramic projection. This is the default display. Use Mapping Type to specify either a radial or cylindrical projection.
• Spherical 3D—Renders the image as a 3-dimensional spherical projection.
• Dome Projection—Renders the image as a dome projection.
• All-Camera—Images from each of the six cameras are rendered separately, unstitched.
• Single-Camera (Raw)—Image data from a selected camera is displayed.
• Single-Camera (Rectified)—Image data from a selected camera is displayed and rectified to account for lens distortion. Rectification is the process of generating an image that fits a pin-hole camera model.
2. For the Rotation Angle, from the Image Processing toolbar click the button, then select from the drop-down list. The rotation angle specifies the orientation of the camera unit's six cameras relative to the projection. The default orientation is camera 0 projecting to the front of the sphere and camera 5 to the upward pole (or top) of the sphere.
3. For the Mapping Type, from the Image Processing toolbar click the button, then select radial or cylindrical from the drop-down list. The mapping projection dictates how the six individual pictures from each camera are stitched into a panoramic display.
Using the Mouse
You can click and drag inside the image display to control the way images render on the screen. These controls do not affect how images are recorded or how the streams are output to other formats. Not all controls work in all display renderings (panoramic, spherical, dome).
• Left-click, drag—Rotates the yaw view in the direction of the drag. In spherical view, the rotation is sustained at a rate proportional to the speed of the drag. Click again to stop rotation.
• Right-click, drag horizontally—Rotates the image pitch. For best results, magnify the image.
• Right-click, drag vertically—Rotates the image roll. For best results, magnify the image.
• Scroll wheel—Magnifies the image display.
• Right-click, drag horizontally + Shift—Rotates the yaw view horizontally when the image is magnified.
• Changing window size—Images stretch to fit the current size of the display window.
9.2.4 Stabilizing Image Display

You can adjust the display of images to compensate for the effect of unwanted movement across frames when the camera records on an unstable surface. Image stabilization can be enabled in both live camera and recorded video modes.

Image stabilization is purely image-based. It does not use external sensors to detect motion. Instead, it compares image patterns across successive frames. Therefore, in order for image stabilization to produce good results, the following requirements should be met:
• Shutter speed must be fast enough to produce clear images without motion blur. Images produced outdoors during daylight hours should not be a concern. In darker places, you may need to set the shutter speed manually.
• There must be patterns across images. If entire images contain only simple textures, such as clear sky or a white wall, the algorithm will have difficulty finding patterns. It is not necessary for all the cameras in the system to have patterns; having patterns on some cameras may suffice. Additionally, patterns should be distant: if they are too close to the camera, there may be errors.
• The frame rate should be fast enough that the relative movement across frames is not large. If the frame rate is low and the relative movement of images across frames is large, the algorithm may be unable to find patterns. The faster the movement, the faster the frame rate should be.
Although you can enable image stabilization during both live camera and recorded video modes, we recommend using it primarily when outputting stream files. The stabilization algorithm is resource-intensive. When stabilization is enabled during live camera mode, the system may be unable to perform the necessary computations while keeping up with all the incoming frames.
To enable Image Stabilization:
• In LadybugCapPro, select Enable Stabilization from the Settings menu, or click the Stabilization icon on the Image Processing toolbar.
• Using the Ladybug API, invoke the ladybugEnableImageStabilization function. For more information, refer to the LadybugProcessStream example, available from the Windows Start menu -> Point Grey Research -> PGR Ladybug -> Examples.
Once enabled, the panoramic, spherical, and dome view outputs become stabilized. Additionally, stream files that are output to JPEG, BMP, or AVI files are stabilized.
To set Stabilization Parameters:
Depending on your requirements, you can adjust the following image stabilization parameters:

Decay Rate - Specifies the degree of correction across images that the stabilization algorithm uses. The default setting is 0.9. A setting of 1.0 (maximum) instructs the algorithm to apply correction across all positional difference. A decay rate of 1.0 may result in an undesired drift effect, as the camera's original position may shift slowly over a long period of time. Other unwanted effects may result. For example, when capturing images from a car as it turns at intersections, the algorithm will attempt to recover the image pattern produced before the turn. By setting this value a little lower than 1.0, the display slowly re-adjusts, cancelling the drift effect.

Maximum Search Range - The stabilization algorithm searches for patterns within a series of templates in each frame. This value specifies the size, in pixels, of each template. If the frame rate is low or movement is fast, try using a larger value for better results. However, keep in mind that a larger value requires more computation.

To set stabilization parameters, select Options from the Settings menu, or click the Options icon on the Main toolbar.
9.2.5 Adjusting Sphere Size for Stitching

Using the sphere size control on the Image Processing toolbar, you can minimize parallax by changing the sphere radius to which images are calibrated for stitching panoramas. The following options are available:
Fixed Size
By default, the sphere radius is calibrated at 20 m, which is well-suited for most outdoor scenes. However, if most subjects in the scene are closer than 20 m, you may get better stitching results by choosing a smaller radius. A larger sphere radius of 100 m is also available for scenes that are more distant.
One-Shot Dynamic Stitch
One-shot dynamic stitch calculates an optimal sphere radius for the entire scene. Successive frames are stitched to the same radius until another adjustment is made.
Auto Dynamic Stitch
Auto dynamic stitch calculates optimal stitching distances for different areas of the image, so that these distances vary across the entire image. Stitching distances are re-calculated for each frame. Auto dynamic stitch is best used when distances in the same image coordinates vary greatly across successive frames, and prominent stitching errors cannot be fixed using another stitching calculation technique listed above.
Although you can enable auto dynamic stitch during both live camera and recorded video modes, we recommend using it primarily when outputting stream files. The dynamic stitch algorithm is resource-intensive. When enabled during live camera mode, the system may be unable to perform the necessary computations while keeping up with all the incoming frames.
You can modify the Minimum distance, Maximum distance, and Default distance used for dynamic stitching. From the Settings menu, select Options to open the LadybugCapPro Options dialog.
9.2.6 Adjusting Vertical Tilt

Panoramic images produced by the Ladybug camera system are sensitive to the position of the camera during image capture. Due to the nature of the panoramic mapping, if the camera is at a slight tilt, vertical lines in the scene will appear curved in the image. Since it is not always possible to ensure that the camera is perfectly aligned with the vertical axis in the scene, the Ladybug SDK allows you to correct vertical tilt in images. This is done by selecting Set Z Axis from the Settings menu. You can adjust vertical tilt either during image capture or stream file playback.
The following images show the effect of adjusting for vertical tilt. The top image shows vertical lines appearing curved; this curvature is corrected in the bottom image.
Figure 9.1: Scene with Tilt Effect
Figure 9.2: Tilt-Adjusted Scene
To adjust Vertical Tilt
When you select Set Z Axis from the Settings Menu, LadybugCapPro prompts you to Shift-click on four points in the image. The first two clicks specify points on a line in the image that should be adjusted vertically, and the second two clicks specify another line in the image that should be adjusted vertically. By doing this, LadybugCapPro can orient the selected lines with the center of the sphere, and re-adjust the Z (vertical) axis of the radial projection. After the fourth click, all the images being captured or replayed are adjusted vertically.
For example, in the first figure above, you might select the following four points to adjust:
Figure 9.3: Suggested Clicking Points to Specify Vertical Line Adjustment
When selecting vertical line clicking points, keep in mind the following:
- The two points of each line should be as far apart as possible.
- The two lines should not be too close to each other, or too close to exactly opposite directions (that is, 180 degrees apart).
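The geometry behind this adjustment can be sketched as follows: each clicked pair of points spans a great-circle plane through the sphere center, and the plane of a truly vertical scene line contains the up axis, so the up axis can be recovered as the intersection of the two planes. This is a geometric sketch of the idea, not the LadybugCapPro implementation:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / n, v.y / n, v.z / n };
}

// Given the four clicked points as unit rays from the sphere center
// (two per vertical line), each line spans a plane through the center.
// The vertical axis lies in both planes, so it is the normalized
// cross product of the two plane normals.
Vec3 estimateUpAxis(Vec3 a1, Vec3 a2, Vec3 b1, Vec3 b2) {
    Vec3 n1 = cross(a1, a2);  // normal of the first line's plane
    Vec3 n2 = cross(b1, b2);  // normal of the second line's plane
    return normalize(cross(n1, n2));
}
```

This also explains the selection guidelines above: widely separated points and well-separated (but not opposite) lines make the two plane normals, and hence their cross product, numerically stable.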
Vertical tilt adjustment can also be accomplished by manually adjusting the image roll using the mouse. To adjust image roll, right-click and drag the mouse vertically inside the image. For best results, magnify the image first using the mouse scroll wheel.
To undo Vertical Tilt
To undo any vertical tilt adjustments, click the Rotation Angle control on the toolbar and select Default.
9.2.7 Adjusting 8-bit Images
Use the Image Processing control on the Image Processing Toolbar to make the following adjustments to 8-bit images:
Color Correction
When enabled, the overall hue, intensity, and saturation of images can be adjusted. Red, green, and blue can also be adjusted individually, which may help to correct white balance issues during image capture.
Color correction may degrade overall image quality.
Sharpening
When enabled, image textures are sharpened. This effect may be most noticeable along texture edges.
Texture Intensity Adjustment
This control is best used when the camera is operating in an independent exposure control mode, or a stream file is opened that was captured in an independent exposure control mode. When enabled, texture intensities are adjusted to compensate for differences in exposure between individual images. The adjustment process converts integer pixel values to floating point values to achieve higher dynamic range (HDR). For best results, use texture intensity adjustment in combination with one of the following techniques:
- Tone mapping (below)
- Output to .HDR format and process with other software. See Viewing and Outputting Stream Files.
Tone Mapping
When enabled, the dynamic range of images is converted from high (HDR) to low (LDR) to more closely resemble the dynamic range of the human eye. The following controls are available:
- Compression—Gamma-style adjustment re-maps image values. A higher value yields greater compression in bright areas of the image.
- Local area—Determines the size of the area around each pixel that is used to calculate new values as part of the overall compression process.
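The effect of the Compression control can be illustrated with a global power-curve step. The formula below is an assumption chosen to show the described behavior (more compression in bright areas as the value rises); the actual algorithm, including the Local area term, is not documented here:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Illustrative gamma-style range compression: normalize an HDR value
// to [0,1], apply a 1/compression power curve, and rescale to 8 bits.
// Higher 'compression' lifts midtones and flattens highlights more.
inline uint8_t compressPixel(float hdrValue, float hdrMax, float compression)
{
    float normalized = std::min(std::max(hdrValue / hdrMax, 0.0f), 1.0f);
    float mapped = std::pow(normalized, 1.0f / compression);
    return static_cast<uint8_t>(std::lround(mapped * 255.0f));
}
```

With compression = 2, for example, a pixel at a quarter of the HDR maximum maps to the middle of the 8-bit range rather than a quarter of it.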
9.2.8 Adjusting 12- and 16-bit Images
Use the Image Processing control on the Image Processing Toolbar to make the following adjustments to 12- and 16-bit images:
Luminance
- Black Level—Adjustments to the image black level can be enabled and set.
- Auto Exposure—Adjustments to the image auto exposure settings include selecting an ROI (Full Image, Top camera only, Bottom only). Manual exposure and gain settings can also be adjusted.
Tonal
Gamma:
Sensor manufacturers strive to make the transfer characteristics of sensors inherently linear: as the number of photons hitting the imaging sensor increases, the resulting image intensity increases linearly. Gamma applies a non-linear mapping to the images produced by the camera. Gamma is applied after analog-to-digital conversion and is available in all pixel formats. Gamma values between 0.5 and 1 decrease brightness, while values between 1 and 4 increase brightness. By default, gamma is enabled with a value of 1.25. To obtain a linear response, disable gamma.
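Assuming the conventional power-law mapping out = in^(1/gamma), which matches the brightness behavior described above (values below 1 darken, values above 1 brighten), the adjustment can be sketched as:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Power-law gamma on a normalized linear intensity in [0,1].
// gamma < 1 darkens the image; gamma > 1 brightens it.
inline double applyGamma(double linear, double gamma)
{
    return std::pow(linear, 1.0 / gamma);
}

// Convenience wrapper for 8-bit pixel values.
inline uint8_t applyGamma8(uint8_t pixel, double gamma)
{
    return static_cast<uint8_t>(std::lround(applyGamma(pixel / 255.0, gamma) * 255.0));
}
```

With the default gamma of 1.25, midtone pixels are lifted slightly; with gamma disabled (equivalent to gamma = 1 here), the linear sensor response passes through unchanged.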
Tone Mapping:
When enabled, the dynamic range of images is converted from high (HDR) to low (LDR) to more closely resemble the dynamic range of the human eye. The following controls are available:
- Compression—Gamma-style adjustment re-maps image values. A higher value yields greater compression in bright areas of the image.
- Local area—Determines the size of the area around each pixel that is used to calculate new values as part of the overall compression process.
Color
White Balance:
The Ladybug5 supports white balance adjustment, which is a system of color correction to account for differing lighting conditions. Adjusting white balance by modifying the relative gains of R, G, and B in an image enables white areas to look "whiter". Taking some subset of the target image and examining the relative red-to-green and blue-to-green responses, the objective is to scale the red and blue channels so that the response is 1:1:1.
The user can adjust the red and blue values. Both values specify relative gain, with a value that is half the maximum value corresponding to a relative gain of zero.
Note: Due to the 12-bit ADC, a 16-bit format is 12 bits padded with zeros.
Miscellaneous
- Smear Correction—When enabled, smear is corrected using either unsaturated or full correction. See Vertical Smear Artifact.
- Noise reduction—When enabled, noise present in the image is reduced.
- Sharpening—When enabled, image textures are sharpened. This effect may be most noticeable along texture edges.
- False color removal—When enabled, removes rainbow sparkles in the image caused by small points of light.
9.2.9 Histogram
Displays a histogram of the values represented in the pixels of the current image.
To display a histogram:
1. From the Image Processing toolbar click the Histogram button, or, from the Settings menu, select Histogram.
2. From Image Information, select the channels to view. Red, Green, and Blue are all selected by default.
3. From Options, select:
- Max Percent—Allows you to adjust the graphical display to view a subset of the percentage representation. For example, to view only the first 5% of the representation of values in the graph, enter '5' in the Max Percent field.
- All Cameras—Specifies that the values are compiled from all six cameras on the Ladybug system. To see values from only one camera at a time, select a camera.
9.3 Saving Images
In both Live Camera and Stream File mode, you can save the current image to panoramic JPEG or panoramic bitmap format, or as six individual color-processed bitmap images, rectified or non-rectified. Images are saved to the My Documents folder.
To save images:
From the Image menu, select one of the following:
- Save Panoramic JPG, Save Panoramic BMP, or Save Panoramic HDR. You can select a pre-defined resolution or a custom resolution.
  - When Custom Size is selected for the first time, enter your desired dimensions. To change your custom dimensions, select Change Custom Size.
  - To set the quality of JPEG compression, see 'Main Toolbar'.
  - Saved images are rendered only in Panoramic display format, regardless of display setting.
- Save 6 Color Processed Images BMP or Save 6 Rectified Images BMP. These options allow you to save separate color-processed images from each of the six cameras on the system. If the second option is chosen, images are rectified to correct lens distortion.
9.4 Viewing and Outputting Stream Files
When you load a previously recorded stream file, you can use the Stream Toolbar to navigate the frames of the stream file and output the file to a variety of different video or image formats, including JPG, BMP, PNG, TIFF8, TIFF16, HDR, AVI, FLV, WMV, and H.264. If outputting to FLV, LadybugCapPro can produce both a panoramic viewer and a spherical viewer with Google Map display.
Use the Stream Toolbar to navigate to specific frames for the output.
Use the output toolbar to set the output type, format, and size, and to convert the file for output.
Selecting an Output Type
The Output Type drop-down list selects the format for outputting the stream. The available output types are described below.
Display only: Displays the frames successively. This option does not create a new file.
Panoramic: Outputs the stream in panoramic view. All output formats are supported. If JPG, BMP, PNG, TIFF, or HDR is chosen, separate files are created for each stitched frame. For more information about .flv output, see Outputting Flash Video. To specify a radial or cylindrical mapping projection, select a Mapping Type.
Dome: Outputs the stream in dome view. Output formats are the same as Panoramic, above.
Stream file: Saves the stream file as another .pgr stream file. This option is useful for creating a stream file out of a subset of frames from the original.
6 Processed: Saves each individual image from each of the camera system's six cameras that comprises the specified output, as separate BMP or TIFF files. Images are not rectified to correct lens distortion.
6 Rectified: Similar to 6 Processed, except each image is rectified to correct lens distortion. Supports JPG, BMP, PNG, TIFF, and HDR formats.
6 Cube map: Saves six individual images that can be used to construct a cube covering the entire field of view. Supported formats are JPG, BMP, PNG, TIFF, and HDR. Cube mapping is an environment mapping method that is useful for creating video game skyboxes and other computer graphics applications. Cube mapping produces an image that is less distorted than panoramic, cylindrical, or dome views. Images are named ladybug_cube_XXXXXX_Y, where XXXXXX is the frame number and Y indicates the cube face, 0-5. Cube faces are numbered as follows:
0-front
1-right
2-back
3-left
4-top
5-bottom
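The documented naming convention can be reproduced with a small helper. The file extension is omitted here, since it depends on the chosen output format:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Build the documented cube-face file name, ladybug_cube_XXXXXX_Y,
// where XXXXXX is the zero-padded frame number and Y is the face
// index (0=front, 1=right, 2=back, 3=left, 4=top, 5=bottom).
std::string cubeFaceName(unsigned frame, unsigned face)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "ladybug_cube_%06u_%u", frame, face);
    return buf;
}
```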
To convert Stream Files
1. Click the conversion button to start the conversion. The Confirm Settings dialog opens for specifying an output directory for the output file.
2. If outputting to AVI format, you can specify a video encoding codec compressor. The compressors that are listed depend on the compression software currently installed on your system. Compressors that have been tested by Point Grey Research and are known to work correctly are shaded a different color. For more information about recommended codecs, see Knowledge Base Article 348.
- If outputting to FLV format, see Outputting Flash Video for information about specifying a bit rate and outputting web publishing files.
- If outputting to WMV format, you can specify a Bit Rate. The bit rate affects the compression quality and file size of the output. Depending on the Output Size specified above, larger sizes generally require larger bit rates to compress images to an acceptable quality. A higher bit rate results in larger files and longer download times, but higher-quality output. There is no recommended value; you may need to try different values before satisfying your requirements.
3. Parallel processing—When selected, instructs LadybugCapPro to create multiple threads to speed up image processing. This consumes additional system resources. For best performance, we recommend the following system configuration:
- Multi-core CPU
- 4 GB RAM or more
- Optimized hard disk drive configuration, such as RAID 0
- Multiple hard drives or partitions, to write the video stream to a different drive
4. After specifying all applicable settings, click Convert! to create the output file(s).
9.4.1 Outputting Flash Video
With LadybugCapPro, you can output a stream file to Flash video (FLV), which is a convenient format for publishing on the web. LadybugCapPro produces all the necessary files for publishing in a single folder, including both a panoramic viewer and a spherical viewer with Google Map display.
To output Flash Video
1. Open a stream file and specify the frames to output using the Stream Toolbar. Make any necessary image adjustments using the Image Processing toolbar.
2. On the output toolbar, specify an Output Size. In the Output Type drop-down list, specify Panoramic or Dome (FLV). Then click the conversion button. The Confirm Settings dialog opens.
3. Under Output Directory, specify a directory to hold the output file(s).
4. Specify a Bit Rate. The bit rate affects the compression quality and file size of the output.
When specifying a bit rate, keep in mind the following:
- Depending on the Output Size specified above, larger sizes generally require larger bit rates to compress images to an acceptable quality.
- A higher bit rate results in larger files and longer download times, but higher-quality output.
- There is no recommended value. You may need to try different values before satisfying your requirements.
5. To output web publishing files associated with the FLV file, check Produce web files.
When checked, LadybugCapPro produces the following files in addition to the FLV file:
- panoramic_viewer.html: Presents the video as a panoramic display.
- spherical_viewer.html: Presents the video as a spherical display. To pan and tilt the display, click and drag the mouse, or use the navigational arrows. To zoom in and out, use the +/- controls. This file is not produced if Dome view is specified in Step 2. Currently, spherical_viewer.html is unable to play videos whose Output Size width or height is greater than 2800.
- spherical_viewer_map.html: Produced only if positional data was recorded with the stream file. For more information, see Working with GPS Data. This file is identical to spherical_viewer.html, but also includes a synchronized Google Map display. To synchronize the map with the video, click and drag on the video, or use the viewer controls. To synchronize the video with the map, double-click on the map.
When spherical_viewer_map.html is installed on a local file system, security restrictions native to the Flash viewer may prevent you from viewing the Google Map associated with the file. To address this issue, you may need to upload the web files to a web server, then access spherical_viewer_map.html through an HTTP request. You may also need to obtain a unique Google Maps API key and embed it in spherical_viewer_map.html. To obtain a Google Maps API key, visit http://code.google.com/apis/maps/signup.html.
- frame_info.xml: Produced only if positional data was recorded with the stream file; used for integrating the Google Map display with the viewer.
- pgrflv.swf: The Flash Player panoramic viewer.
- sphericalViewer.swf: The Flash Player spherical viewer.
- SkinOverPlaySeekStop.swf: The Flash Player viewer skin.
- AC_RunActiveContent.js: A JavaScript file to check the installation of Flash Player.
6. If Produce web files is checked, specify a subfolder, within the specified Output Directory, in which to output the web publishing files.
7. Click Convert!
10 Troubleshooting
10.1 Support
Point Grey Research endeavors to provide the highest level of technical support possible to our customers. Most support resources can be accessed through the Point Grey Product Support page.
Creating a Customer Login Account
The first step in accessing our technical support resources is to obtain a Customer Login Account. This requires a valid name and e-mail address. To apply for a Customer Login Account go to the Product Downloads page.
Knowledge Base
Our Knowledge Base contains answers to some of the most common support questions. It is constantly updated, expanded, and refined to ensure that our customers have access to the latest information.
Product Downloads
Customers with a Customer Login Account can access the latest software and firmware for their cameras from our Product Downloads page. We encourage our customers to keep their software and firmware up-to-date by downloading and installing the latest versions.
Contacting Technical Support
Before contacting Technical Support, have you:
1. Read the product documentation and user manual?
2. Searched the Knowledge Base?
3. Downloaded and installed the latest version of software and/or firmware?
If you have done all the above and still can’t find an answer to your question, contact our Technical Support team.
10.2 Status Indicator LED
LED Status: Description
Off: Not receiving power
Steady green: Receiving power
Flashing yellow/Steady yellow: Initializing FPGA
Steady yellow-green: Insufficient power
Steady bright green: Acquiring and transmitting images
Flashing bright, then brighter green: Accessing camera registers (no image acquisition)
Flashing green and red: Updating firmware
Flashing red: Temporary problem
Steady red: Serious problem
10.3 Blemish Pixel Artifacts
Cosmic radiation may cause random pixels to generate a permanently high charge, resulting in a permanently lit, or 'glowing,' appearance. Point Grey tests for and programs white blemish pixel correction into the camera firmware. In very rare cases, one or more pixels in the sensor array may stop responding and appear black (dead) or white (hot/stuck).
10.3.1 Pixel Defect Correction
Point Grey tests for blemish pixels on each camera. The mechanism to correct blemish pixels is hard-coded into the camera firmware. Pixel correction is on by default. The correction algorithm involves applying the average color or grayscale values of neighboring pixels to the blemish pixel.
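The neighbor-averaging idea can be sketched as follows for a single-channel image. This is an illustration of the principle only, not the firmware's hard-coded algorithm:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Replace a known blemish pixel with the average of its in-bounds
// horizontal neighbours in a row-major grayscale image.
void correctBlemish(std::vector<uint8_t>& gray, int width, int x, int y)
{
    int sum = 0, count = 0;
    if (x > 0)         { sum += gray[y * width + x - 1]; ++count; }
    if (x + 1 < width) { sum += gray[y * width + x + 1]; ++count; }
    if (count > 0)
        gray[y * width + x] = static_cast<uint8_t>(sum / count);
}
```

For color images, the same averaging would be applied per channel; the camera's correction runs in firmware against a factory-programmed list of blemish locations.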
Related Knowledge Base Articles
How Point Grey tests for white blemish pixels: Knowledge Base Article 314
10.4 Vertical Smear Artifact
When a strong light source is shone on the camera, a faint bright line may be seen extending vertically through an image from a light-saturated spot. Vertical smear is a byproduct of the interline transfer system that extracts data from the CCD.
Smear is caused by scattered photons leaking into the shielded vertical shift register. When the pixel cells are full, some charges may spill over into the vertical shift register. As the charge shifts in/out of the light-sensitive sensor area and travels down the vertical shift register, it picks up the extra photons, causing a bright line in the image. Smear above the bright spot is collected during read-out, while smear below the bright spot is collected during read-in.
10.4.1 Smear Reduction
Smear may be minimized using one or more of the following techniques:
- Reduce the bright light source.
- Increase the shutter time/lower the frame rate. This increases the amount of time light is collected in the photosensors relative to the time in the vertical transfer register.
- Turn the light source off before and after exposure by using a mechanical or LCD shutter.
- Use a pulsed or flashed light source. A pulsed light of 1/10,000 duration is sufficient in most cases to allow an extremely short 100 ns exposure without smear.
- Enable smear correction during post processing. See Adjusting 12- and 16-bit Images.
Appendix A: Ladybug API Examples
The following examples are included in the Ladybug SDK.
Examples are accessible from:
Start Menu --> All Programs --> Point Grey Research --> PGR Ladybug --> Examples
With the exception of ladybugCSharpEx and ladybugProcessStream_CSharp, all examples are Visual C++.
ladybug3dViewer: Shows how to display a spherical view in which the user can pan and tilt the image inside the sphere.
ladybugAdvancedRenderEx: Shows how to draw a Ladybug 3D spherical image in conjunction with other 3D objects.
ladybugCaptureHDRImage: Demonstrates how to capture a series of images closely spaced in time suitable for input into a high dynamic range image creation system.
ladybugCSharpEx: Shows how to create a C# program that uses the Ladybug API.
ladybugEnvironmentalSensors: Shows how to access the information from the environmental sensors.
ladybugEnvMap: Shows how to apply cube mapping on spherical images to construct a skybox.
ladybugOGLTextureEx: Shows how to access Ladybug images directly on the graphics card as an OpenGL texture map.
ladybugOutput3DMesh: Demonstrates how to produce a 3D mesh out of calibration data from the connected camera.
ladybugPanoramic: Shows how to use a document-view application to grab Ladybug images and display them in a window.
ladybugPanoStitchExample: Shows how to extract an image set from a Ladybug camera, stitch it together and write the final stitched image to disk.
ladybugPostProcessing: Shows how to perform post processing on 12- or 16-bit images.
ladybugProcessStream: Shows how to process all, or part, of a stream file. Also available as a C# example: ladybugProcessStream_CSharp.
ladybugProcessStreamParallel: Shows how to process a Ladybug image stream using multiple Ladybug context parallel processing.
ladybugSimpleGPS: Shows how to use a GPS device in conjunction with a Ladybug camera to integrate GPS data with Ladybug images.
ladybugSimpleGrab: Illustrates the basics of acquiring an image from a Ladybug camera.
ladybugSimpleGrabDisplay: Shows how to use the OpenGL Utility Toolkit (GLUT) to grab Ladybug images and display them in a simple window.
ladybugSimpleRecording: Shows how to record Ladybug images to .pgr stream files. When used in conjunction with a GPS device, also shows how to record images when the GPS location changes after a specified distance.
ladybugStitchFrom3DMesh: Shows how to stitch six raw images without using the Ladybug SDK.
ladybugStreamCopy: Copies images from a Ladybug source stream to a destination stream.
ladybugTranslate2dTo3d: Shows how to translate a 2D point in the raw image to a 3D point.
ladybugTriggerEx: Shows how to control trigger and strobe.
A.1 ladybug3dViewer
This example shows how to display a spherical view in which the user can pan and tilt the image inside the sphere.
The program reads one rectangular panoramic image (.bmp or .ppm) and maps it onto the sphere using OpenGL functions.
This program does not handle video or .pgr format stream files, and does not require the Ladybug SDK API.
A.2 ladybugAdvancedRenderEx
This example shows how to draw a Ladybug 3D spherical image in conjunction with other 3D objects. To render 3D objects together with a Ladybug 3D spherical image, ladybugDisplayImage() must be called prior to drawing any objects. The size and position of the objects must be inside the Ladybug spherical image; otherwise, the objects will not be seen. Additionally, the OpenGL depth test must be enabled.
This example must be run with glut32.dll.
This example must open the following .ppm texture files:
- TextureCam0.ppm
- TextureCam1.ppm
- TextureCam2.ppm
- TextureCam3.ppm
- TextureCam4.ppm
- TextureCam5.ppm
A.3 ladybugCaptureHDRImage
This example code demonstrates how to capture a series of images closely spaced in time suitable for input into a high dynamic range image creation system.
The main() function initializes the camera and calls the other subroutines.
The setupHDRRegisters() subroutine sets all of the registers necessary to put the camera into 'HDR Mode'.
captureImages() captures images directly from a Ladybug camera.
processImages() computes the panoramic images.
The Ladybug has a bank of four gain and shutter registers in addition to its standard set. When put into 'HDR Mode', the camera cycles through the settings contained in these registers on an image-by-image basis. This allows users to capture a set of four images with widely varying exposure settings. The four images can be captured within 4/30 of a second if the data format is set to LADYBUG_DATAFORMAT_COLOR_SEP_SEQUENTIAL_JPEG.
The shutter and gain values are read from an INI file defined by INI_FILE_NAME. If you find the shutter and gain settings are not appropriate, change the data in this file.
Once these images have been captured, the program processes the images and outputs a configuration file containing exposure data suitable for input into a program such as 'pfstools' and 'pfscalibration'.
Having captured the images, the user should then run the 'pfsinhdrgen' program in the image directory with a command line similar to the following:
pfsinhdrgen HDRDescription.hdrgen | pfshdrcalibrate -v | pfsout output.hdr
Where 'HDRDescription.hdrgen' is the name of the configuration file output by this program and 'output.hdr' is the name of the output image.
The output file can then be viewed using pfsview (or pfsv):
pfsv output.hdr
You can also make an HDR image out of four output images using Adobe Photoshop CS3, easyHDR, etc. In this case, you don't need to provide additional exposure data.
'pfstools' is available at http://pfstools.sourceforge.net/
'pfscalibration' is available at: http://www.mpi-inf.mpg.de/resources/hdr/calibration/pfs.html
A.4 ladybugCSharpEx
This example shows how to create a C# program that uses the Ladybug API. The program can display stitched panoramic images either from the camera or from a stream file. If a stream file contains GPS information, it is also displayed. The program requires LadybugAPI.cs and LadybugAPI_GPS.cs, which define the interface of the Ladybug API for the C# language.
A.5 ladybugEnvironmentalSensors
This example shows how to use the Ladybug API to obtain data from the environmental sensors on the Ladybug5. In addition to reading the raw information from the camera, the example shows how to calculate the camera's heading from the raw compass values.
A.6 ladybugEnvMap
This example illustrates how to apply cube mapping on Ladybug's spherical images to construct a skybox. In computer graphics, cube mapping is a type of environment mapping used to simulate surfaces that reflect the scene at a distant location.
Here, Ladybug images are used as the environment and are updated in real time. For each scene, six surfaces of a cube are rendered. This is done by rendering Ladybug's spherical view six times, setting the field of view to 90 degrees and positioning the virtual camera to specific surface directions. These rendering results are then used as textures for the cube mapping. The overall scene, which is comprised of the reflective objects, is then rendered. All calculations required to construct the cube map are handled inside the OpenGL library.
This example must be run with freeglut.dll present.
A.7 ladybugOGLTextureEx
This example shows how to access Ladybug images directly on the graphics card as an OpenGL texture map. The rendered Ladybug image is accessed by its texture ID and can be mapped to any geometric object as desired by using OpenGL functions.
Right click the mouse in the client area to display a menu and select various Ladybug image types.
This example must be run with glut32.dll.
A.8 ladybugOutput3DMesh
This example demonstrates how to produce a 3D mesh out of calibration data from the connected camera. The output of this program can be used directly as the input of the ladybugStitchFrom3DMesh program. You can save the output of this program to a file by using redirection. From the command prompt, navigate (cd) to the Ladybug SDK's "bin" directory and type ladybugoutput3dmesh > mymesh.txt. The output is then in the file "mymesh.txt".
A.9 ladybugPanoramic
This example shows how to use a document-view application to grab Ladybug images and display them in a window.
The CLadybugPanoramicDoc class is used to initialize and start a Ladybug camera. It creates a thread for grabbing and processing images.
The CLadybugPanoramicView class is used to display Ladybug images. It initializes the window for OpenGL display. To display a Ladybug image, CLadybugPanoramicView::OnDraw() calls the image-drawing API functions.
A.10 ladybugPanoStitchExample
This example shows how to extract an image set from a Ladybug camera, stitch it together and write the final stitched image to disk.
Since Ladybug library version 1.3.alpha.01, this example is modified to use ladybugRenderOffScreenImage(), which is hardware accelerated, to render the stitched images.
Typing ladybugPanoStitchExample /? (or -?) at the command prompt prints the usage information for this application.
A.11 ladybugPostProcessing
This example shows how to use the post processing pipeline introduced in the Ladybug 1.7 API together with a 12- or 16-bit image from the Ladybug5 to produce a processed image. The code shows how to modify the LadybugAdjustmentParameters structure to define the type of post processing to perform.
A.12 ladybugProcessStream
This example shows how to process all, or part, of a stream file. The program processes each frame and outputs an image file sequentially. If the stream file contains GPS information, the program outputs the information to a separate text file. By editing this source code, users can change image size, image type, output file format, color processing algorithm, and blending width. Users can also change options for falloff correction, software rendering and stabilization.
This example is also available in C# as ladybugProcessStream_CSharp.
A.13 ladybugProcessStreamParallel
This example shows how to process a Ladybug image stream using multiple Ladybug context parallel processing.
This program creates a stream reading thread and one or more image processing threads. The stream reading thread reads images from a stream and puts them into a buffer queue. Each processing thread gets images from the buffer queue and processes the images concurrently with other threads.
The number of processing threads you can create depends on many factors such as image resolution, color processing method, the size of the rendered image and the size of the graphics card memory.
If the required resources are beyond the ability of the graphics card, the program may report a run-time error. For example, one full-resolution frame from a Ladybug3 is 1616 x 1232. Rendering a 4096 x 2048 off-screen image using the LADYBUG_HQLINEAR color processing method requires at least 110 MB of GPU memory. In this case, with 512 MB of graphics card memory, you may run three threads. This allows for image processing plus additional GPU memory allocations that are necessary, such as image display. More than three threads may cause an error.
The overall processing speed depends on several factors: disk I/O speed, number of CPUs, and performance of the graphics card. For fast stream processing, we recommend the following:
- A multi-core processor
- A graphics card with 512 MB of memory or more
- A fast hard disk drive configuration, such as RAID 0
- Reading the stream from one drive and writing the rendered images to another drive
This example reads the processing parameter options from the command line. Use -? or -h to display the usage help.
A.14 ladybugSimpleGPS
This example shows how to use a GPS device in conjunction with a Ladybug camera to integrate GPS data with
Ladybug images.
Before running this example, you need to know the COM port to which the GPS device is mapped, even if the device uses a USB interface. Right-click "My Computer" from the Windows Start menu. Under the "Hardware" tab, click "Device Manager." Expand the "Ports (COM & LPT)" node and note the COM port to which the GPS device is mapped.
A.15 ladybugSimpleGrab
This example illustrates the basics of acquiring an image from a Ladybug camera. The program performs the following tasks:
1. Creates a context.
2. Initializes a camera.
3. Starts the transmission of images.
4. Grabs an image.
5. Processes the grabbed image using a color processing algorithm.
6. Saves the 6 raw images as BMP files.
7. Destroys the context.
A.16 ladybugSimpleGrabDisplay
This example shows how to use the OpenGL Utility Toolkit (GLUT) to grab Ladybug images and display them in a simple window. This example starts the first Ladybug camera on the bus. The camera is started in JPEG mode and the images are processed with the LADYBUG_DOWNSAMPLE4 color processing method.
Right click the mouse in the client area to display a menu and select various Ladybug image types.
This example must be run with glut32.dll.
A.17 ladybugSimpleRecording
This example shows how to record Ladybug images to .pgr stream files. The example starts the first Ladybug camera on the bus with the parameters in the .ini file defined by INI_FILENAME.
This example displays the grabbed images only when the grabbing function returns LADYBUG_TIMEOUT. This means that saving images is the highest priority.
Right-click the mouse in the client area to display a pop-up menu and select various options, or use the following hot keys:
- 'r' or 'R' - start recording; press again to stop recording.
- 'p' or 'P' - display panoramic image.
- 'a' or 'A' - display all-camera image.
- 'd' or 'D' - display dome view image.
- 'Esc', 'q' or 'Q' - exit the program.
When used in conjunction with a GPS device, this example also shows how to record images when the GPS location changes after a specified distance, in meters. The distance parameter is specified in the .ini file. The accuracy of the result depends on the GPS device and the GPS data update rate.
This example must be run with freeglut.dll and Ladybug SDK v. 1.3.0.2 or later.
A.18 ladybugStitchFrom3DMesh
This example shows how to stitch six raw images without using the Ladybug SDK. Note that users still need the 3D mesh data produced by the ladybugOutput3DMesh program, which requires the Ladybug SDK.
This program is useful for users who want to stitch images in an environment where the Ladybug SDK is not supported.
A.19 ladybugStreamCopy
This program copies images from a Ladybug source stream to a destination stream. If a calibration file is specified, this program writes this calibration file to the destination file instead of using the calibration file in the source stream.
The last two arguments specify how many images to copy. If they are not specified, all the images are copied.
A.20 ladybugTranslate2dTo3d
This example shows how to use the Ladybug API to translate a 2D point in the raw image to a 3D point in the Ladybug camera coordinate space and vice versa. It also shows how to use ladybugGet3dMap() provided by the Ladybug API to perform the translation.
This example is a companion to TAN2012009 Geometric Vision using Ladybug Cameras, found in Knowledge Base Article 399.
A.21 ladybugTriggerEx
This example shows how to use the Ladybug API to control the trigger and strobe functionality of the camera. The example sets the camera into trigger mode 0 (Standard) and then uses software triggering to control when an image is captured.
Appendix B: Stream File Format
B.1 File Signature
Every Ladybug stream file starts with a signature. This signature uniquely identifies the file as a Ladybug stream file.

Offset   Name        Bytes   Type               Value              Description
0x0000   Signature   16      Character string   PGRLADYBUGSTREAM   Ladybug Stream file identifier
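A minimal check of this 16-byte signature might look like the following sketch (the function name is illustrative, not part of the SDK):

```python
import io

# The 16-byte signature at file offset 0x0000
SIGNATURE = b"PGRLADYBUGSTREAM"

def is_ladybug_stream(f):
    """Return True if the open binary file starts with the Ladybug
    stream file signature."""
    f.seek(0)
    return f.read(16) == SIGNATURE
```

In practice `f` would be a `.pgr` file opened in binary mode; `io.BytesIO` can stand in for one when experimenting.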
B.2 Stream Header Structure
The stream header structure begins immediately after the file signature, at offset 16 from the beginning of the file. It contains the information defined by LadybugStreamHeadInfo in ladybug.h. The byte order of this data block is little endian.
Offset   Name                          Bytes   Type           Description
0x0000   Ladybug stream version no.    4       unsigned int   Stream version number
0x0004   Frame rate                    4       unsigned int   The frames recorded per second
0x0008   Base serial No.               4       unsigned int   Ladybug base unit serial number
0x000C   Head serial No.               4       unsigned int   Ladybug head unit serial number
0x0010   Reserved                      104     unsigned int   Reserved space
0x0078   Data format                   4       unsigned int   Image data format defined in ladybug.h
0x007C   Resolution                    4       unsigned int   Image resolution defined in ladybug.h
0x0080   Stippled format               4       unsigned int   Image Bayer pattern
0x0084   Configuration data size       4       unsigned int   Number of bytes of the configuration data
0x0088   N - Number of images          4       unsigned int   Number of images in this stream file
0x008C   M - Number of index entries   4       unsigned int   Number of entries used in the index table
0x0090   K - Increment                 4       unsigned int   Interval value for indexing the images
0x0094   Stream data offset            4       unsigned int   Offset of the first image data
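The fixed fields above can be decoded with a short sketch. The field names in the returned dictionary are illustrative; the offsets and the little-endian byte order follow the table:

```python
import struct

def parse_stream_header(block):
    """Decode the fixed fields of the stream header (little endian).
    'block' is the raw header bytes starting at file offset 16."""
    u32 = lambda off: struct.unpack_from("<I", block, off)[0]
    return {
        "version":            u32(0x0000),
        "frame_rate":         u32(0x0004),
        "base_serial":        u32(0x0008),
        "head_serial":        u32(0x000C),
        "data_format":        u32(0x0078),
        "resolution":         u32(0x007C),
        "stippled_format":    u32(0x0080),
        "config_data_size":   u32(0x0084),
        "num_images":         u32(0x0088),  # N
        "num_index":          u32(0x008C),  # M
        "increment":          u32(0x0090),  # K
        "stream_data_offset": u32(0x0094),
    }
```

The `"<I"` format string selects a little-endian 32-bit unsigned integer, matching the byte order stated above.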
Offset   Name                         Bytes   Type           Description
0x0098   GPS summary data offset      4       unsigned int   Offset of GPS summary data block
0x009C   GPS summary data size        4       unsigned int   Size of GPS summary data block
0x00A0   Frame header size            4       unsigned int   Size of internal frame header
0x00A4   Humidity availability        4       bool           Whether humidity sensor is available
0x00A8   Humidity minimum             4       unsigned int   Minimum value for sensor
0x00AC   Humidity maximum             4       unsigned int   Maximum value for sensor
0x00B0   Air pressure availability    4       bool           Whether air pressure sensor is available
0x00B4   Air pressure minimum         4       unsigned int   Minimum value for sensor
0x00B8   Air pressure maximum         4       unsigned int   Maximum value for sensor
0x00BC   Compass availability         4       bool           Whether compass sensor is available
0x00C0   Compass minimum              4       unsigned int   Minimum value for sensor
0x00C4   Compass maximum              4       unsigned int   Maximum value for sensor
0x00C8   Accelerometer availability   4       bool           Whether accelerometer sensor is available
0x00CC   Accelerometer minimum        4       unsigned int   Minimum value for sensor
0x00D0   Accelerometer maximum        4       unsigned int   Maximum value for sensor
0x00D4   Gyroscope availability       4       bool           Whether gyroscope sensor is available
0x00D8   Gyroscope minimum            4       unsigned int   Minimum value for sensor
0x00DC   Gyroscope maximum            4       unsigned int   Maximum value for sensor
0x00E0   Frame rate                   4       float          Actual frame rate, represented as a floating point value
0x00E4   Reserved                     780     unsigned int   Reserved space
Offset   Name                Bytes   Type           Description
…        Image index [M-1]   4       unsigned int   Offset of image (M-1)*K
…        …                   …       …              …
0x0BE4   Image index [2]     4       unsigned int   Offset of image 2*K
0x0BE8   Image index [1]     4       unsigned int   Offset of image K
0x0BEC   Image index [0]     4       unsigned int   Offset of image 0
The image index table between 0x03F0 and 0x0BF0 is used to locate the keyframes of the stream. Using this table can speed up image searching. The value of K (offset 0x0090) means that the index table contains the offset value of every Kth image. The offset values are relative to the beginning of the stream file. For example, if K = 50, the value of 'Image index [5]' is the offset of image 250 (K * 5 = 250), that is, the location of image 250 relative to the first byte of the stream file.
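The lookup described above can be sketched as follows (the helper names are illustrative; remaining frames after the keyframe must be walked forward frame by frame):

```python
def keyframe_entry(image_number, k):
    """Index of the table entry at or before image_number
    (entry i holds the offset of image i*K)."""
    return image_number // k

def seek_hint(index_table, image_number, k):
    """Return (file offset of the nearest preceding keyframe,
    number of frames still to skip forward from there)."""
    entry = keyframe_entry(image_number, k)
    return index_table[entry], image_number - entry * k
```

With K = 50, image 250 falls exactly on entry 5, matching the example in the text; image 253 uses the same keyframe and then skips 3 frames forward.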
B.3 Configuration Data
The configuration data begins immediately after the stream header structure. The data is in ASCII text format. It is extracted from the Ladybug camera head for image calibration. The size of this data block is the value of 'Configuration Data Size' as defined in the Stream Header Structure.
B.4 Frame Header
Since version 7 of the stream file format, there is a frame header at the start of each image. The size of the frame header can be found in the stream header. Frame headers are present regardless of whether the image data format is JPEG or uncompressed. The information in the frame header can be found in the LadybugImageHeader structure.
B.5 JPEG Compressed Image Data Structure
If the image format specified for recording is JPEG, each image from the six camera sensors is JPEG-compressed as four separate Bayer channels. Therefore, one frame of Ladybug images contains 24 JPEG data blocks.
The first frame of JPEG images begins immediately after the configuration data. The second frame follows the first frame, the third frame follows the second, and so on. The offset value of the first JPEG image, relative to the beginning of the file, is the value of Stream Data Offset as defined in the Stream Header Structure.
The general layout of a JPEG compressed LadybugImage is as follows:
Image Header (0x000 – 0x400)
Cam 0 Bayer 0
Cam 0 Bayer 1
Cam 0 Bayer 2
Cam 0 Bayer 3
Cam 1 Bayer 0
...
Cam 4 Bayer 3
Cam 5 Bayer 0
Cam 5 Bayer 1
Cam 5 Bayer 2
Cam 5 Bayer 3
GPS NMEA data
For each compressed Ladybug image, the GPS NMEA sentences are written after the JPEG image data, at the offset value of GPS_Offset. If there is no GPS data, GPS_Offset and GPS_Size are set to zero.
The byte order of this data block is big endian.
Offset   Name                  Value        Bytes   Type           Description
0x0000   Timestamp                          4       unsigned int   The cycle time seconds, cycle time count and cycle offset of this image
0x0004   Reserved                           4       N/A            N/A
0x0008   Data size                          4       unsigned int   The total data size of this frame, including the padding block
0x000C   Reserved                           4       N/A            Filled with 0s
0x0010   Fingerprint           0xCAFEBABE   4       unsigned int   Unique fingerprint
0x0014   Version number                     4       unsigned int   Version number
0x0018   Time (seconds)                     4       unsigned int   Timestamp, in seconds (UNIX time epoch)
0x001C   Time (microseconds)                4       unsigned int   Microsecond fraction of above second
0x0020   Sequence ID                        4       unsigned int   Image sequence number
0x0024   Refresh Rate                       4       unsigned int   Horizontal refresh rate
0x0028   Gain[6]                            24      unsigned int   Gain values for each camera
0x0040   White balance                      4       unsigned int   White balance
0x0044   Bayer gain                         4       unsigned int   Same as register 0x1044
Offset       Name                 Value        Bytes      Type           Description
0x0048       Bayer map                         4          unsigned int   Same as register 0x1040
0x004C       Brightness                        4          unsigned int   Brightness
0x0050       Gamma                             4          unsigned int   Gamma
0x0054       Head Serial Number                4          unsigned int   Serial number of Ladybug Head
0x0058       Shutter[6]                        24         unsigned int   Shutter values for each camera
0x0070       Free space                        24         N/A            Reserved space
0x0088       Free space                        632        N/A            Random data
0x0300       Free space                        56         N/A            Filled with 0s
0x0338       GPS data offset      GPS_Offset   4          unsigned int   The offset of GPS data
0x033C       GPS data size        GPS_Size     4          unsigned int   The size of GPS data
0x0340       JPEG data offset     Offset_0_0   4          unsigned int   Cam 0, Bayer Channel 0
             JPEG data size       Size_0_0     4          unsigned int   Cam 0, Bayer Channel 0
             JPEG data offset     Offset_0_1   4          unsigned int   Cam 0, Bayer Channel 1
             JPEG data size       Size_0_1     4          unsigned int   Cam 0, Bayer Channel 1
             JPEG data offset     Offset_0_2   4          unsigned int   Cam 0, Bayer Channel 2
             JPEG data size       Size_0_2     4          unsigned int   Cam 0, Bayer Channel 2
             JPEG data offset     Offset_0_3   4          unsigned int   Cam 0, Bayer Channel 3
             JPEG data size       Size_0_3     4          unsigned int   Cam 0, Bayer Channel 3
             JPEG data offset     Offset_1_0   4          unsigned int   Cam 1, Bayer Channel 0
             JPEG data size       Size_1_0     4          unsigned int   Cam 1, Bayer Channel 0
…            …                    …            …          …              …
             JPEG data offset     Offset_5_2   4          unsigned int   Cam 5, Bayer Channel 2
             JPEG data size       Size_5_2     4          unsigned int   Cam 5, Bayer Channel 2
             JPEG data offset     Offset_5_3   4          unsigned int   Cam 5, Bayer Channel 3
             JPEG data size       Size_5_3     4          unsigned int   Cam 5, Bayer Channel 3
0x0400       JPEG data                         …          Binary         First JPEG data block
Offset_i_j   JPEG data                         Size_i_j   Binary         Cam i, Bayer Channel j
GPS_Offset   GPS NMEA data                     GPS_Size   ASCII text     GPS NMEA sentences
Beginning at offset 0x0400 are the 24 JPEG data blocks for Camera i, Bayer Channel j, where i = 0, 1, 2, 3, 4, 5 and j = 0, 1, 2, 3.
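Assuming the (offset, size) pairs are packed consecutively at an 8-byte stride from 0x0340, as the table layout suggests, the 24 JPEG block locations can be read like this sketch:

```python
import struct

FIRST_PAIR = 0x0340  # location of Offset_0_0 in the frame header table

def jpeg_block_table(header, num_cams=6, num_channels=4):
    """Read the 24 (offset, size) pairs for camera i, Bayer channel j.
    Assumes the pairs are packed consecutively, 8 bytes apart, starting
    at 0x0340, and that this block is big endian as stated above."""
    table = {}
    for i in range(num_cams):
        for j in range(num_channels):
            pos = FIRST_PAIR + (i * num_channels + j) * 8
            off, size = struct.unpack_from(">II", header, pos)
            table[(i, j)] = (off, size)
    return table
```

Under this assumption the pair for camera 5, channel 3 lands at 0x03F8, so the table ends exactly where the JPEG data begins at 0x0400.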
The four bytes of timestamp data at offset 0x0000 contain the cycle time seconds, cycle time count and cycle offset at the moment the image is captured.

Field                  Range    Bits
Cycle Time (seconds)   0-127    0-6
Cycle Time (count)     0-7999   7-19
Cycle Offset           0-3071   20-31
For more information about the Ladybug timestamp, see the definition of the LadybugTimestamp struct in ladybug.h.
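The three fields can be unpacked from the 32-bit timestamp value with plain shifts and masks. This sketch assumes bit 0 is the most significant bit, consistent with the big-endian byte order of this block; verify against LadybugTimestamp before relying on it:

```python
def unpack_cycle_time(ts):
    """Split the 32-bit timestamp into (seconds, count, offset).
    Bit numbering assumed MSB-first (bit 0 = most significant)."""
    seconds = (ts >> 25) & 0x7F    # bits 0-6,   range 0-127
    count   = (ts >> 12) & 0x1FFF  # bits 7-19,  range 0-7999
    offset  =  ts        & 0xFFF   # bits 20-31, range 0-3071
    return seconds, count, offset
```

The field widths (7 + 13 + 12 = 32 bits) match the ranges in the table: 7 bits for 0-127, 13 bits for 0-7999 and 12 bits for 0-3071.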
The data between offset 0x0010 and 0x008F contains the information of the LadybugImageInfo structure defined in ladybug.h.
B.6 Uncompressed Image Data Structure
If the image format is uncompressed, the image data is the raw binary data from the camera. The first frame begins immediately after the configuration data. The number of bytes for each of the six images is determined by the image resolution and data format as defined in the Stream Header Structure.
The Bayer pattern of the image is defined by Stippled Format as defined in the Stream Header Structure.
For uncompressed Ladybug images, the GPS NMEA sentences are written to the last 1024 bytes of the image data of camera 5. This means that the last 1024 bytes of image data are overwritten by GPS data if a GPS device is available.
The following table lists the data structure of each uncompressed frame, assuming a resolution of LADYBUG_RESOLUTION_1632x1232.
Offset       Name            Type         Description
0x00000000   Image Data      Binary       Cam-0, Bayer pattern image data
0x001EAE00   Image Data      Binary       Cam-1, Bayer pattern image data
0x003D5C00   Image Data      Binary       Cam-2, Bayer pattern image data
0x005C0A00   Image Data      Binary       Cam-3, Bayer pattern image data
0x007AB800   Image Data      Binary       Cam-4, Bayer pattern image data
0x00996600   Image Data      Binary       Cam-5, Bayer pattern image data
0x00B81400   GPS NMEA data   ASCII text   1024 bytes space for GPS NMEA sentences
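The offsets in the table are simply multiples of the per-camera image size (1632 x 1232 pixels, one byte per pixel of raw Bayer data), which a short sketch can confirm:

```python
# LADYBUG_RESOLUTION_1632x1232, one byte per pixel (raw Bayer data)
WIDTH, HEIGHT = 1632, 1232

def camera_offset(cam):
    """Byte offset of camera 'cam' image within one uncompressed frame."""
    return cam * WIDTH * HEIGHT
```

Index 6 (one past the last camera) lands at 0x00B81400, the start of the GPS NMEA block in the table above.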
B.7 GPS Summary Data Format
The GPS summary data begins immediately after the image data discussed in JPEG Compressed Image Data Structure or Uncompressed Image Data Structure. The offset value relative to the beginning of the file is the value of GPS Summary Data Offset as defined in the Stream Header Structure. The data structure of the GPS summary data is defined by GPS3DPoint in ladybugstream.h. No other groups are defined in version 1.2 Beta 19 or earlier. The byte order of this data block is big endian.
Offset   Name              Value              Bytes   Type           Description
0x0000   Data identifier   GPSSUMMARY_00001   16      Characters     First group identifier
0x0010   Reserved          Filled with 0's    16      N/A            Reserved space
0x0020   Item data size                       4       unsigned int   Size of each data item
0x0024   Number of items                      4       unsigned int   The number of items
0x0028   Image No.                            4       unsigned int   Associated image number
0x002C   Longitude                            8       double         Longitude of item 0
0x0034   Latitude                             8       double         Latitude of item 0
0x003C   Altitude                             8       double         Altitude of item 0
0x0044   Image No.                            4       unsigned int   Associated image number
0x0048   Longitude                            8       double         Longitude of item 1
0x0050   Latitude                             8       double         Latitude of item 1
0x0058   Altitude                             8       double         Altitude of item 1
…        …                 …                  …       …              …
         Image No.                            4       unsigned int   Associated image number
         Longitude                            8       double         Longitude of item N-1
         Latitude                             8       double         Latitude of item N-1
         Altitude                             8       double         Altitude of item N-1
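Each item is 28 bytes (a 4-byte image number plus three 8-byte doubles), so item locations and contents follow directly from the table; helper names here are illustrative:

```python
import struct

ITEMS_START = 0x0028
ITEM_SIZE = 4 + 8 + 8 + 8  # image no. + longitude + latitude + altitude

def gps_item_offset(i):
    """Offset of GPS summary item i within the block (item 0 at 0x0028)."""
    return ITEMS_START + i * ITEM_SIZE

def read_gps_item(block, i):
    """Decode item i as (image_no, longitude, latitude, altitude).
    The block is big endian as stated above."""
    return struct.unpack_from(">Iddd", block, gps_item_offset(i))
```

Item 1 lands at 0x0044, matching the second group of rows in the table.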
Appendix C: Calibration and Coordinate System
Effective warping and stitching of the images produced by the camera system's six sensors is achieved through accurate calibration of the physical location and orientation of the sensors and the distortion model of the lens. This section discusses the representation used to describe the physical orientation of all of the sensors with respect to one another. The Ladybug software manages the camera coordinate system by breaking it down into seven right-handed coordinate frames of one of two types: six independent image sensor coordinate frames and a camera coordinate frame.
C.1 Coordinate Systems on Ladybug Cameras
Each lens has its own right-handed 3D coordinate system, and there is a Ladybug 3D coordinate system associated with the camera as a whole, for a total of seven 3D coordinate systems on every Ladybug camera. In addition, there is a 2D pixel-grid coordinate system for each sensor.
C.1.1 Lens 3D Coordinate System
Each of the six lenses has its own 3D coordinate system.
- The origin is the optical center of the lens.
- The Z-axis points out of the sensor towards the scene, i.e. it is the optical axis.
- The X- and Y-axes are relative to the pixel grid of the image sensor associated with that lens.
- The Y-axis points along the image columns. The positive Y direction is in the direction of ascending row number; this points down from the point of view of a normally oriented image.
- The X-axis points along the image rows. The positive X direction is in the direction of ascending column number; this points to the right in a normally oriented image.
This coordinate system is used to represent 3D space from the point of view of each lens/sensor pair. Its units are meters, not pixels.
C.1.2 Sensor 2D Coordinate System
Each sensor has its own 2D coordinate system.
- The u- and v-axes form the image-based 2D coordinate system for the rectified image space and are measured in pixels.
- The origin of the coordinate system is at the intersection of the optical axis and the rectified image plane, and differs for each sensor.
- The u-axis points along the rows of the image sensor in the direction of ascending column number (i.e. to the right).
- The v-axis points along the columns in the direction of ascending row number (i.e. down).
C.1.3 Ladybug Camera Coordinate System
The Ladybug camera coordinate system is centered within the Ladybug case and is determined by the positions of the six lens coordinate systems.
- The origin is the center of the five horizontal camera origins.
- The Z-axis is parallel to the optical axis of the top lens (lens 5) (*).
- The X-axis is parallel to the optical axis of lens 0 (*).
- The Y-axis is consistent with a right-handed coordinate system based on the X- and Z-axes.
- There may be some variations between the Ladybug2, Ladybug3 and Ladybug5 models.
(*) Note: due to assembly tolerances, the optical axes of lens 5 and lens 0 will typically not be perfectly perpendicular. The X-axis of the Ladybug camera coordinate system is adjusted slightly to ensure that they are perpendicular.
For more detailed information on the representation used to describe the physical orientation of the sensors with respect to one another, and for instructions on transforming 2D local points to 3D global points and vice versa, see Knowledge Base Article 399.
C.2 Projection Types
Once a three-dimensional spherical coordinate system is obtained, the image on the sphere can be projected to a mapping based on different projection methods. The projected image is usually two-dimensional so that it can easily be displayed on a monitor or printed on paper. Each projection type has its own benefits and shortcomings.
Radial (Equirectangular) Projection
This is one of the most popular projections, and the output image is easy to use. In LadybugCapPro, you can output video to this projection by selecting Output Type "Panoramic" and then, using the Image Processing Toolbar, specifying Mapping Type "Radial."
The projected image has two coordinates: theta (Ө) for horizontal, and phi (Ф) for vertical. The projection equation from the spherical coordinate system is as follows:

Ө = ATAN2(Y, X)
Ф = ACOS(Z / R), where R = SQRT(X² + Y² + Z²)

Where (X, Y, Z) are points in the spherical coordinate system, and ATAN2 and ACOS are functions provided by the standard C library.
(Ө,Ф) are coordinates of the projected image.
The range of values of Ө is –Pi to Pi.
The range of values of Ф is 0 to Pi.
In order to convert to the actual pixel position on the radial projection image, appropriate scaling is needed based on these value ranges.
(Ө,Ф) can be obtained by referring to the fTheta and fPhi members of the LadybugPoint3d struct, which is obtained by invoking ladybugGet3dMap().
The benefit of this projection is that all the points in the original spherical coordinate system can be mapped on a single image. Additionally, the correspondence of the original point and the projected point is simple, in that the horizontal axis corresponds to longitude and the vertical axis corresponds to latitude of a globe. However, this projection suffers from the disadvantage of pixels becoming increasingly stretched out as one approaches the poles of the sphere (top and bottom of the projected image).
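The mapping from a 3D point to a pixel in the equirectangular image can be sketched as follows. The equations and the linear scaling are an illustrative reconstruction consistent with the stated ranges; ladybugGet3dMap() remains the authoritative source for the per-pixel mapping:

```python
import math

def radial_project(x, y, z, width, height):
    """Map a 3D point on the sphere to (u, v) pixel coordinates in an
    equirectangular image of the given size."""
    theta = math.atan2(y, x)                           # longitude, -pi..pi
    phi = math.acos(z / math.sqrt(x*x + y*y + z*z))    # colatitude, 0..pi
    u = (theta + math.pi) / (2 * math.pi) * (width - 1)
    v = phi / math.pi * (height - 1)
    return u, v
```

A point on the +X axis has theta = 0 and phi = Pi/2, so it lands at the center of the image, while the north pole (the +Z axis) maps to the top row, where the stretching described above is worst.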
Example radial projection image
Cylindrical Projection
This projection is similar to the radial projection, but with a limited field of view, as areas close to the poles cannot be rendered. In LadybugCapPro, you can output video to this projection by selecting Output Type "Panoramic" and then, using the Image Processing Toolbar, specifying Mapping Type "Cylindrical."
The projection equation is as follows:

Ө = ATAN2(Y, X)
Ф = Z / SQRT(X² + Y²)

(Ө,Ф) are the coordinates of the projected image.
Ө is computed in the same manner as in the radial projection.
Ф can go to infinity as the 3D point nears the pole, which is why the field of view of the cylindrical projection must be limited. When rendered using LadybugCapPro, only the field of view between -45 degrees and +45 degrees is displayed. Thus, this projection is useful when only the images from the side cameras are needed.
(Ө,Ф) can be obtained by referring to the fCylAngle and fCylHeight members of the LadybugPoint3d struct, which is obtained by invoking ladybugGet3dMap().
Example cylindrical projection image
Dome Projection
This is a projection that maps the sphere to a dome-like shape. In LadybugCapPro, select Output Type "Dome."
(U, V) are the coordinates of the dome-projected image. With this projection, the north pole of the sphere is drawn in the center and the south pole is drawn as the outer rim of the dome.
You can limit the range of the image to be rendered by calling the function ladybugChangeDomeViewAngle().
Example dome projection image
Cubic (Skybox) Projection
This projection requires 6 images, where each image is one surface of a cube. By dividing the entire sphere into 6 images, the distortion in each image is limited. To reconstruct the panoramic image, the 6 cube-surface images must be displayed using 3D computer graphics. Thus, this mapping is suitable for video game applications.
This projection is equivalent to the view captured by a lens that has no distortion (a pin-hole lens). Each surface of the cube can be obtained by rendering the spherical image while setting the appropriate camera rotation and a field of view of exactly 90 degrees. This is achievable by calling ladybugSetSphericalViewParams(). To use this projection in LadybugCapPro, select Output Type "6 Cube map."
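Choosing which of the six 90-degree faces a view direction belongs to is standard cube-map math: the axis with the largest absolute component wins. The face naming below is illustrative and may differ from the SDK's own ordering:

```python
def cube_face(x, y, z):
    """Pick which of the six cube faces the direction (x, y, z)
    falls on: the axis with the largest absolute component wins."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"
```

Because each face covers exactly a 90-degree field of view, every direction maps to exactly one face, and rendering all six reconstructs the full sphere.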
Example cubic projection images
Revision History

Revision   Date               Notes
1.0        January 21, 2013   Initial release - support for model LD5-51S5C-44R/B
1.1        January 25, 2013   Updated Stream File Format
1.2        January 30, 2013   Clarified mounting instructions; clarified 16-bit pixel format and 12-bit ADC; minor edits to clarify user interface controls; added information on desiccant plug