The VideoStitch workspace offers four panels, accessible from the upper right corner:
- the source panel displays the input videos
- the output panel allows you to preview the stitched result
- the interactive panel allows you to preview the stitched video in an interactive viewer
- the process panel is the place to process the video and adjust settings related to the stitching project
The timeline allows you to play, pause, and seek through the videos:
- the interactive timeline displays playback progress
- the first frame’s timecode, and button to set it to the current frame
- the last frame’s timecode, and button to set it to the current frame
- the ‘working sequence’ that will be processed when rendering the output video file
- Use the ‘start time’ and ‘stop time’ buttons to set the in and out points of the working sequence.
- Process settings apply to both the GUI preview and the output video file.
- Using a reasonably low resolution and ‘Linear’ blending significantly improves preview playback speed.
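The in and out points of the working sequence are plain frame positions. Assuming a constant integer frame rate, converting between a timecode and a frame number is straightforward; the sketch below is illustrative (the function names are ours, not part of VideoStitch):

```python
# Convert between "HH:MM:SS:FF" timecodes and absolute frame numbers,
# assuming a constant integer frame rate (no drop-frame handling).
def timecode_to_frame(tc: str, fps: int) -> int:
    hours, minutes, seconds, frames = (int(p) for p in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frame_to_timecode(frame: int, fps: int) -> str:
    seconds, frames = divmod(frame, fps)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# Length of a working sequence between in and out points at 30 fps:
start = timecode_to_frame("00:00:02:00", 30)   # frame 60
stop = timecode_to_frame("00:00:05:15", 30)    # frame 165
print(stop - start + 1)                        # 106 frames, inclusive
```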
Stitcher settings :
- Width / Height : size of the stitched video
- Blender : the type of blending used to stitch the videos together. This affects both stitching quality and rendering speed.
- Linear : a simple yet efficient and extremely fast blending.
- Multiband : a more complex blending type. Requires more graphics memory.
Output file settings :
- Start time / Stop time : define the in and out points of the sequence to be processed.
- Copy sound from input : copy the sound from one input into the processed output video file
- Project Folder : a shortcut to open the current project folder
VideoStitch now shows preview playback speed – the speed at which it is stitching and displaying your project.
The playback speed is automatically limited to the speed of the original videos.
With high end graphics cards, the CPU may become the bottleneck that prevents VideoStitch from performing even faster.
- GPU Memory Usage : displays “used MB” / “total available MB” values. The “GPU Memory Usage” figure includes not only VideoStitch, but also all the applications currently using the graphics card.
- GeForce GTX690 : the CUDA device (graphics card) currently used by VideoStitch’s GUI. The maximum memory available depends on the graphics card. If you have multiple graphics cards (CUDA devices) available on your system, you can set the one used by the GUI from the “preferences” menu.
- Stitched size : the size at which the VideoStitch is currently rendering. The stitched size has an impact on how much memory is used, and on the rendering speed. It can be set in the “Process” panel.
In general, the more graphics memory available for VideoStitch, the higher output resolution you will be able to reach.
Some high end graphics cards embed multiple GPUs. These cards behave just like a multiple-GPU setup and will display as 2 different CUDA devices. The amount of memory that VideoStitch can use is the ‘per GPU’ graphics memory - e.g. the GeForce GTX690, advertised as a 4 GB card, offers 2 GB per GPU and will only allow stitching what 2 GB can handle.
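To get an intuition for how the stitched size drives memory usage, a back-of-the-envelope estimate of a single frame buffer helps. The 4 bytes per pixel (RGBA) figure is an illustrative assumption, not VideoStitch internals; actual usage also depends on the blender and the number of inputs:

```python
# Rough estimate of graphics memory needed by one RGBA frame buffer
# at a given stitched size (illustrative, not VideoStitch internals).
def frame_buffer_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / (1024 * 1024)

for w, h in [(1920, 960), (3840, 1920), (5120, 2560)]:
    print(f"{w}x{h}: {frame_buffer_mb(w, h):.0f} MB per buffer")
```

Doubling both dimensions quadruples the memory needed, which is why the stitched size is the first thing to lower when running out of graphics memory.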
Setting up preferences
The preferences panel is accessed through ‘Edit > Preferences’.
CUDA device : allows you to specify which graphics card the GUI should work with. The GUI only handles 1 device. You can specify different devices for processing (VideoStitch Extended only).
Calibration tool : set the path to your calibration tool of choice; it should be the PTGui or Hugin executable path. VideoStitch uses it to bootstrap calibration with these applications.
Language : Allows for setting the GUI language. Currently, French and English are the only available translations. (You need to restart VideoStitch for language changes to take effect.)
Left & right arrows : previous & next frames
Space bar : play/pause
Ctrl+J : jump to a given frame
Shift + Home : Set the first frame
Shift + End : Set the last frame
Ctrl + T : Apply template
Ctrl + E : Extract current frames from input videos
Ctrl + Shift + E : Extract current frames without any dialog
Ctrl + F5 : reload the current project
Calibrations and templates
What’s a calibration ?
A calibration is simply a PTGui or Hugin panorama project that is used as a template in VideoStitch. While these applications stitch still images together, VideoStitch has been designed and optimised for video processing. When loading videos in VideoStitch to create a new project, you need to provide such a calibration file, which configures how the videos will be stitched together. You can either :
- Create a new calibration
- Apply an existing calibration
- Creating a set of quality calibrations for your camera array is the key to an efficient video stitching workflow. A quality calibration can easily be re-used to bootstrap new VideoStitch projects.
From calibration files, VideoStitch imports :
- output panorama
- Global exposure & white balance
- Output projection
- for each input
- Image size & crop parameters
- Orientation and position parameters (yaw, pitch, roll, viewpoint correction, shift)
- Lens parameters : projection, a,b,c parameters
- Camera response curve & vignetting
- Masks (PTGui ‘red’ masks only)
These input parameters are the most important settings VideoStitch imports. They define the geometric and photometric transformations of the videos.
Supported PTGui / Hugin features
Unsupported calibration features
- HDR and exposure fusion
- Flare optimization (PTGui)
- ‘Image Shear’ parameter on input images : g (horizontal shear) and t (vertical shear).
- All projections that are not listed above
Creating a new calibration
To create a new calibration, it is necessary that the camera array and the scene it is recording have both remained static, in order to :
- solve synchronisation issues that can occur with some cameras
- avoid motion blur and rolling shutter artifacts, which would also degrade the calibration’s quality as they distort the images.
- Drop the videos you want to stitch (or use File>Open Videos). The videos will be sorted alphanumerically by VideoStitch.
- Use the “Edit > Extract stills” command and check the “open calibration tool” option. Alternatively, you can use the “Calibration” button available in the “source” view.
- Images will be extracted from the videos to the project folder, and your preferred software for calibration will be launched automatically with these images.
If you are already a PTGui or Hugin user, this step should be straightforward. If not, there are plenty of tutorials on the PTGui and Hugin websites that will get you started quickly.
Once PTGui or Hugin launches, you might be asked for information about your lens and camera. This information allows PTGui and Hugin to automatically detect how to stitch the images together. You can speed up your PTGui / Hugin workflow by using templates :
- PTGui and Hugin have “File > Save as template” and “File > Apply template” commands, that allow you to easily re-use projects.
- You can set a default project template in PTGui.
Creating re-usable calibrations
It is recommended to create good quality templates that can be used to instantly bootstrap new projects.
Properly synchronised videos are necessary for a quality calibration. Creating a calibration without first making sure the videos are synchronised is a common mistake when getting started with GoPro camera arrays.
These few guidelines should help you ensure quality calibration files :
- A single calibration file cannot fit all situations. It works best when it has been created for a specific ‘distance from the camera’. Create calibrations for indoor, outdoor, or even finer distance intervals.
- Add control points to objects that are roughly at the same distance from the camera.
- Use videos shot with static cameras, in a bright and static environment, especially if your cameras often have synchronisation errors or rolling shutter (e.g. Hero2 and Hero3 cameras). Furthermore, camera motion introduces motion blur in the image, which lowers the accuracy of the control points created in the calibration process.
- Add control points to all overlapping images
Applying a calibration
Simply drag & drop a PTGui or Hugin file on VideoStitch to instantly apply :
- camera positions and orientations (yaw, pitch, roll, viewpoint correction, shift)
- camera response curves and vignetting
- lens profile (projection, fov, a,b,c)
- output projection
- output size
- PTGui masks
All other parameters stay unchanged (synchronization, exposure compensation … )
- Applying calibration makes it easy to review and compare different calibrations.
Editing a calibration
Editing a calibration is done directly in PTGui / Hugin. When VideoStitch extracts images, it names them based on the input indexes : input-0.jpg, input-1.jpg, … input-N.jpg
Thanks to this naming convention, you can easily re-use calibration files.
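When preparing stills outside VideoStitch, the same naming convention can be reproduced with a small script. This is a hypothetical helper, not a VideoStitch tool; it assumes the source filenames sort alphanumerically in the same order VideoStitch loads the inputs:

```python
# Rename extracted stills to the input-N naming convention described above.
# Source filenames are sorted alphanumerically, matching the order in
# which VideoStitch sorts its inputs.
import os

def rename_to_inputs(folder: str, extension: str = ".jpg") -> list[str]:
    stills = sorted(f for f in os.listdir(folder) if f.endswith(extension))
    renamed = []
    for index, name in enumerate(stills):
        target = f"input-{index}{extension}"
        os.rename(os.path.join(folder, name), os.path.join(folder, target))
        renamed.append(target)
    return renamed
```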
You can also refresh PTGui or Hugin with VideoStitch’s current frames :
- “Edit > extract stills” to choose the directory where extracted images are saved.
- “Ctrl + Shift + E” keyboard shortcut to extract images directly to that directory (without a dialog window)
PTGui / Hugin will automatically update themselves when the extracted images are overwritten by new ones.
When editing a calibration in PTGui/Hugin :
- Don’t use ‘Image Shear’ (g & t image parameters). This parameter is not used by VideoStitch and would influence other geometric parameters of the calibration.
- Do not change image order as this would switch camera positions in VideoStitch.
GoPro camera arrays – arrays made of consumer cameras in general – are typically hard to start all together at once and need to be accurately synchronized for good stitching results. Furthermore, it is impossible to ensure that each frame set will be recorded from all cameras simultaneously.
There are a few things to keep in mind when dealing with synchronization issues :
- Record with a high fps when possible, this gives you finer ‘grain’ when fine tuning synchronization
- Be aware that rolling shutter can be mistakenly identified as a synchronization error. This is especially true for footage with fast camera movements.
- Be aware of possible AV (Audio/Video) synchronization issues when using audio synchronization
One of the most useful VideoStitch features is the ability to change the synchronization on the fly and instantly review the result.
- In order to review and adjust synchronization, you need to preview the stitched video.
To access the synchronization widget, use the “Edit > Synchronization” menu.
The widget offers an audio synchronization tool, as well as a direct access to the synchronization settings.
When you change one of the offset values, VideoStitch instantly updates the output preview. Values can be changed while the video is playing.
For each input, a checkbox allows you to “link” values together, so that they remain synchronized.
For example :
Adjusting input-0 and input-1, then checking them, ensures these 2 will remain synchronized.
Synchronize input-2 with input-1 or input-0, then check it as well so that these 3 videos remain synchronized : increasing or decreasing one of their offsets also updates the other two.
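The linking behaviour can be modelled as a group of inputs whose offsets move together. This is a minimal sketch of the idea (class and method names are illustrative, not VideoStitch's API):

```python
# Model the "link" checkboxes: changing the offset of any linked input
# shifts every other linked input by the same amount, preserving their
# relative synchronization.
class SyncOffsets:
    def __init__(self, offsets: dict[str, int]):
        self.offsets = dict(offsets)
        self.linked: set[str] = set()

    def link(self, name: str) -> None:
        self.linked.add(name)

    def set_offset(self, name: str, value: int) -> None:
        delta = value - self.offsets[name]
        self.offsets[name] = value
        if name in self.linked:
            for other in self.linked - {name}:
                self.offsets[other] += delta

sync = SyncOffsets({"input-0": 0, "input-1": 3, "input-2": 5})
for name in ("input-0", "input-1", "input-2"):
    sync.link(name)
sync.set_offset("input-0", 2)   # shifts all three linked inputs by +2
print(sync.offsets)             # {'input-0': 2, 'input-1': 5, 'input-2': 7}
```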
Audio based synchronization
The synchronization widget has an “Audio synchronization” tool that analyses the videos’ soundtracks to determine how they match, and automatically adjusts synchronization based on this analysis.
The “synchronize” button automatically computes and applies the result to your project.
- start point : timecode at which the algorithm will start analyzing sound
- end point : timecode at which the algorithm will stop
The default values cover the first 15 seconds of your videos. This assumes you have started all cameras and produced a loud sound pattern within 15 seconds.
If you must rely on audio to synchronize videos, you need to produce a sound that is identifiable over the background noise for all cameras. Make sure it is included in the sequence defined by the start point and end point.
- Audio based synchronization often provides erroneous results with audio tracks from different cameras / microphones. It performs well if a single sound signal was fed to all cameras in the camera array.
- Keep in mind that relying exclusively on audio to synchronize the input videos will often result in poor synchronization. Some cameras, including GoPro, may provide poor AV synchronization.
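The general principle behind audio synchronization tools is cross-correlation: slide one soundtrack over the other and keep the lag with the highest correlation. The sketch below is a naive pure-Python illustration of that principle on synthetic samples, not VideoStitch's actual algorithm:

```python
# Estimate the offset (in samples) between two soundtracks by brute-force
# cross-correlation. Fine for short analysis windows like the default
# 15-second sequence; real tools use far faster FFT-based correlation.
def best_lag(reference: list[float], other: list[float], max_lag: int) -> int:
    # Positive lag means `other` is delayed relative to `reference`.
    def score(lag: int) -> float:
        return sum(reference[i] * other[i + lag]
                   for i in range(len(reference))
                   if 0 <= i + lag < len(other))
    return max(range(-max_lag, max_lag + 1), key=score)

# Synthetic example: 'other' is 'reference' delayed by 4 samples.
reference = [0.0, 0.0, 1.0, 0.5, -0.8, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
other = [0.0] * 4 + reference[:-4]
print(best_lag(reference, other, max_lag=6))   # 4
```

A loud, distinctive sound pattern makes the correlation peak sharp and unambiguous, which is why producing one on set helps the analysis.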
AV (Audio/Video) synchronization issues
Audio / video synchronization refers to the soundtrack of a video not being properly synchronized with the image data. A common example of this would be lips moving while the sound coming out of them seems to lag. The following screenshot explicitly illustrates the issue :
We can clearly see that the recorded image data is out of sync with the audio soundtrack by 2 frames, which will produce synchronization related errors in the stitched output.
While this doesn’t completely defeat the purpose of audio synchronization, it makes it necessary to review the stitched output and manually fine tune the synchronization’s offset values in order to get the right adjustments.
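A frame offset translates directly into a time lag at a given frame rate. A quick sketch (the helper name is ours) shows why even the 2-frame AV offset above is significant:

```python
# Convert an AV desynchronization measured in frames into milliseconds.
def frames_to_ms(frames: int, fps: float) -> float:
    return frames * 1000.0 / fps

print(frames_to_ms(2, 30))   # ~66.7 ms at 30 fps
print(frames_to_ms(2, 60))   # ~33.3 ms at 60 fps
```

Higher frame rates shrink the time error of a given frame offset, which is another reason to record at a high fps when you plan to rely on audio synchronization.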
Automatic exposure compensation analyzes the input videos and computes exposure adjustments. It creates keyframes at a specified frame interval on the input exposure parameters. Exposure between each keyframe is automatically interpolated.
- Calculating exposure currently ignores and overwrites all previously computed exposure values and related keyframes.
- You should always perform automatic exposure after the input videos have been synchronised.
Exposure compensation is accessed using “Edit > Exposure compensation”
Start point : start of the sequence on which exposure compensation will process. The default value is the first frame of your project.
End point : end of the sequence on which exposure compensation will process. The default value is the last frame of your project.
Adjust every : interval between adjustments; a keyframe will be created for each input exposure parameter. The default interval is 2 seconds. Lower values are slower to process but give better results. Adjust depending on your project :
- If lighting conditions change frequently, use a lower interval (e.g. a value of 1 will generate exposure keyframes every second).
- Use higher interval values if lighting conditions don’t change in your videos.
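The relationship between the sequence bounds, the frame rate, and the “adjust every” interval determines where keyframes land. This sketch mirrors the description above; it is illustrative, not VideoStitch's code:

```python
# Frame indices at which exposure keyframes would be created, given the
# sequence bounds, the frame rate, and the "adjust every" interval in
# seconds (illustrative model of the behaviour described above).
def keyframe_positions(start: int, end: int, fps: int, every_s: float) -> list[int]:
    step = max(1, round(fps * every_s))
    return list(range(start, end + 1, step))

# 10-second sequence at 48 fps, one keyframe every 2 seconds:
print(keyframe_positions(0, 480, 48, 2.0))   # [0, 96, 192, 288, 384, 480]
```

Exposure between consecutive keyframes is then interpolated, so a shorter interval tracks fast lighting changes at the cost of a longer analysis.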
To cancel auto exposure, simply close the widget.
Check out this forum post to learn how to activate the timeline with keyframes.
Exposure compensation on 48 fps video, with keyframes generated :
Working with masks
- Only PTGui masks are currently supported, from which only the “red” (exclusive) masks are used (green masks won’t be imported).
- Hugin masks work differently; they are not supported and will not be imported by VideoStitch when you apply a Hugin calibration.
Masks allow for hiding parts of the input videos so that they do not appear in the final output. Use masks when you want to push the seams of an input video, hiding this video and revealing the other overlapping videos, in order to fine tune stitching for a specific feature in the resulting video.
Masks are static over time; you can seek to any frame in the video and instantly review how the mask affects the stitched output.
- When applying a PTGui template to a VideoStitch project, the masks will automatically be imported.
- Editing and removing masks has to be done in PTGui.
If masks from multiple inputs overlap, no image data will appear in the overlapping region. The final stitched output will hold a “black hole” (corresponding to what PTGui would output as an alpha channel).
Rendering the final video
To render the output video file, simply switch to the process panel :
- Set the output file name using the ‘browse’ button.
- Review the important project settings : blender and output size. The ‘maximum’ button will attempt to compute the maximum size.
- Hit ‘Send to batch’ to add the project to the batch stitcher queue. ‘Send a copy of the project’ is an option to duplicate and save the project with a different name, which will be sent to the batch queue. Your current project will remain open in VideoStitch so that you can further edit it.
- Hit ‘process’ to start rendering the video immediately. VideoStitch Extended gives you the option to choose one or multiple CUDA GPUs.
- Set the desired video encoding.
- Choose which input should be used as the audio source for the output video.
- Select the soundtrack that should be copied from one input video to the output
- Set the time parameters of the sequence to render.
- Projection and horizontal FOV values for the output video can be changed in the process settings. It is recommended to change the projection and output FOV directly in PTGui or Hugin.
- Changing the fps value will affect the length and playback speed of the output video. It is recommended to keep the same framerate as the original video.
- Large panoramic videos require larger bitrate values than regular videos.
- Fast motion video content requires higher encoding bitrates
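One common way to reason about bitrate is a bits-per-pixel heuristic: the bigger the frame and the faster the motion, the more bits each second of video needs. The 0.1 / 0.2 bpp figures below are illustrative starting points of ours, not VideoStitch recommendations:

```python
# Bits-per-pixel heuristic for choosing an encoding bitrate: panoramic
# and fast-motion content needs more bits per pixel than regular video.
# The bpp figures passed in are illustrative assumptions.
def bitrate_mbps(width: int, height: int, fps: float, bpp: float) -> float:
    return width * height * fps * bpp / 1e6

print(f"{bitrate_mbps(3840, 1920, 30, 0.1):.0f} Mbps")   # calmer scenes
print(f"{bitrate_mbps(3840, 1920, 30, 0.2):.0f} Mbps")   # fast motion
```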
- It is highly recommended to use output sizes that are multiples of 16, e.g. :
- 1920×960, 3840×1920, 4096×2048, 4800×2400, 5120×2560
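An arbitrary output size can be snapped to the nearest lower multiple of 16 with simple integer arithmetic; a quick sketch:

```python
# Snap an output dimension to a multiple of 16 (rounding down keeps the
# result within the original dimension).
def snap16(value: int) -> int:
    return value // 16 * 16

for w, h in [(4000, 2000), (4100, 2050)]:
    print(f"{snap16(w)}x{snap16(h)}")   # 4000x2000, 4096x2048
```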
Maximum resolution = 4096 pixels.
This is the default encoding. It is supported by most applications and provides the best compromise between quality and file size.
The output size must be a multiple of 8. Mpeg4 doesn’t support videos that exceed 8192 pixels in either dimension.
VideoStitch outputs Mpeg4 part2 (not AVC) encoded video.
It doesn’t support resolutions that are a multiple of 4096 (e.g. 4096px, 8192px).
The MPEG2 codec is widely supported by video players. It provides an acceptable quality at the price of a high bitrate.
Very high resolution videos (over 8192 pixels) won’t be decoded properly by most video players and editing suites, as such high resolution videos are not yet common in the industry.
Exporting very high resolution sequences
Video encoding for very high resolution output can be problematic :
- when the maximum available bitrate is insufficient for the output resolution’s needs.
- when your video editing suite doesn’t properly decode very high resolution videos (most video players won’t properly handle videos over 8K).
In such situations, you may want to fall back to an image sequence export, such as *.jpg or *.tiff.
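Image sequences trade encoding limits for disk space, so it is worth estimating storage first. The 3 bytes per pixel (8-bit RGB, uncompressed) figure below is an assumption of ours; JPEG output would be far smaller but lossy:

```python
# Rough disk-space estimate for an uncompressed image-sequence export
# (8-bit RGB ~ 3 bytes per pixel; an illustrative assumption).
def sequence_gb(width: int, height: int, fps: float, seconds: float,
                bytes_per_pixel: int = 3) -> float:
    return width * height * bytes_per_pixel * fps * seconds / 1024**3

# One minute of 8192x4096 output at 30 fps:
print(f"{sequence_gb(8192, 4096, 30, 60):.0f} GB")   # ~169 GB
```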