Workspace overview

splash screen

 

The VideoStitch workspace offers four panels, available from the upper right corner.

  • the source panel displays the input videos
  • the output panel allows you to preview the stitched result
  • the interactive panel allows you to preview the stitched video in an interactive viewer
  • the process panel is the place to process the video and adjust settings related to the stitching project
The output, interactive and process panels become available after a calibration has been imported and a video is ready to stitch.
Once a calibration has been loaded, it is not possible to change input footage directly in the GUI.

 

Timeline

The timeline allows you to play, pause, and seek through the videos.

timecodes

timeline

 

  1. the interactive timeline displays playback progress
  2. the first frame’s timecode, and a button to set it to the current frame
  3. the last frame’s timecode, and a button to set it to the current frame
  4. the ‘working sequence’ that will be processed when rendering the output video file

  • Use the ‘start time’ & ‘stop time’ buttons to set the in and out points of the working sequence.
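As an illustration of how the timecodes shown in the timeline relate to frame numbers, here is a minimal sketch (assuming an integer frame rate; the helper names are hypothetical and not part of VideoStitch):

```python
def frame_to_timecode(frame: int, fps: int) -> str:
    """Convert a frame index to an HH:MM:SS:FF timecode at the given fps."""
    total_seconds, ff = divmod(frame, fps)
    hh, rem = divmod(total_seconds, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frame(tc: str, fps: int) -> int:
    """Inverse conversion: HH:MM:SS:FF timecode back to a frame index."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff
```

For example, at 30 fps, frame 95 corresponds to timecode 00:00:03:05. Note that fractional frame rates (29.97 drop-frame, etc.) need a more elaborate scheme than this sketch.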

 

Process settings

Process settings

  • Process settings apply to both the GUI preview and the output video file.
  • Using a reasonably low resolution and Linear blending significantly improves preview playback speed.

Stitcher settings:

  • Width / Height: size of the stitched video
  • Blender: the type of blending used to stitch the videos together. This affects both the stitching quality and the rendering speed.
    • Linear: a simple yet efficient and extremely fast blending
    • Multiband: a more complex blending type. Requires more graphics memory.

Output file settings:

  • Start time / Stop time: define the in and out points of the sequence to be processed.
  • Copy sound from input: copy the sound from one input into the processed output video file
  • Project Folder: a shortcut to open the current project folder

 

Performance monitoring

VideoStitch now shows preview playback speed – the speed at which it is stitching and displaying your project.

rendering preview at 38fps

The playback speed is automatically limited to the speed of the original videos.

With high-end graphics cards, the CPU may become the bottleneck preventing VideoStitch from performing even faster.

 

GPU monitoring


GPU_monitoring-01

 

  1. GPU Memory Usage: displays “used MB” / “total available MB” values. The “GPU Memory Usage” includes not only VideoStitch, but also all the other applications currently using the graphics card.
  2. GeForce GTX690: the CUDA device (graphics card) currently used by VideoStitch’s GUI. The maximum available memory depends on the graphics card. If you have multiple graphics cards (CUDA devices) on your system, you can set the one used by the GUI from the “preferences” menu.
  3. Stitched size: the size at which VideoStitch is currently rendering. The stitched size has an impact on how much memory is used and on the rendering speed. It can be set in the “Process” panel.

In general, the more graphics memory available for VideoStitch, the higher output resolution you will be able to reach.

See our hardware page for more information about graphics cards.

Some high-end graphics cards embed multiple GPUs. These cards behave just like a multiple-GPU setup and will display as 2 different CUDA devices. The amount of memory that VideoStitch can use is the ‘per GPU’ graphics memory: e.g. the GeForce GTX690, advertised as a 4 GB card, offers 2 GB per GPU and will only allow stitching what 2 GB can handle.

 

Setting up preferences

 

The preferences panel is accessed through ‘Edit > Preferences’.

 

preferences panel

 

CUDA device: allows you to specify which graphics card the GUI should work with. The GUI only handles one device. You can specify different devices for processing (VideoStitch Extended only).

Calibration tool: set the path to your calibration tool of choice. It should be the PTGui or Hugin executable path. This is used by VideoStitch to bootstrap calibration with these applications.

Language: allows for setting the GUI language. Currently, French and English are the only available translations. (You need to restart VideoStitch for language changes to take effect.)

 

Keyboard shortcuts

Left & right arrows: previous & next frame

Space bar: play/pause

Ctrl + J: jump to a given frame

Shift + Home: set the first frame

Shift + End: set the last frame

Ctrl + T: apply template

Ctrl + E: extract current frames from input videos

Ctrl + Shift + E: extract current frames without any dialog

Ctrl + F5: reload the current project

Calibrations and templates

What’s a calibration?

A calibration is simply a PTGui or Hugin panorama project that is used as a template in VideoStitch. While these applications can stitch together still images, VideoStitch has been designed and optimised for video processing. When loading videos into VideoStitch to create a new project, you need to provide such a calibration file, which configures how the videos will be stitched together. You can either:

  • Create a new calibration
  • Apply an existing calibration

 

  • Creating a set of quality calibrations for your camera array is the key to an efficient video stitching workflow. A quality calibration can easily be re-used to bootstrap new VideoStitch projects.

 

From calibration files, VideoStitch imports :

  • output panorama
    • Global exposure & white balance
    • Output projection
  • for each input
    • Image size & crop parameters
    • Orientation and position parameters (yaw, pitch, roll, viewpoint correction, shift)
    • Lens parameters : projection, a,b,c parameters
    • Camera response curve & vignetting
    • Masks (PTGui ‘red’ masks only)

These input parameters are the most important settings VideoStitch imports. They define the geometric and photometric transformations of the videos.

 

Supported PTGui / Hugin features

 

Input projections

  • equirectangular

  • rectilinear

  • circular fisheye

  • full-frame fisheye

Output projections

  • equirectangular

  • rectilinear

  • full-frame fisheye

  • stereographic

Unsupported calibration features

  • Masks (Hugin)

  • Blend priority

  • HDR and exposure fusion

  • flare optimization (PTGui)

  • ‘Image Shear’ parameter on input images : g (horizontal shear) and t (vertical shear).

  • All projections that are not listed above

 

Creating a new calibration


To create a new calibration, it is necessary that the camera array and the scene it is recording have both remained static, in order to:

  • avoid synchronisation issues that can occur with some cameras
  • avoid motion blur and rolling shutter, which would also impact the calibration’s quality as they result in image distortion

 

New calibration
To create a new calibration, you need to extract still images from the videos and stitch them together in PTGui / Hugin. The resulting PTGui / Hugin project will be the actual calibration file :

  1. Drop the videos you want to stitch (or use File>Open Videos). The videos will be sorted alphanumerically by VideoStitch.
  2. Use “Edit > Extract stills” and check the “open calibration tool” option. Alternatively, you can use the “Calibration” button available in the “source” view.
  3. Images will be extracted from the videos to the project folder, and your preferred software for calibration will be launched automatically with these images.
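The alphanumeric sorting in step 1, together with the input-0.jpg, input-1.jpg, … naming used for extracted stills, can be sketched as follows (a hypothetical illustration assuming a natural sort; VideoStitch’s exact ordering rule may differ):

```python
import re

def alphanumeric_key(name: str):
    """Split a filename into text and number chunks so that, e.g.,
    'GOPR2.MP4' sorts before 'GOPR10.MP4' (natural/alphanumeric order)."""
    return [int(c) if c.isdigit() else c.lower() for c in re.split(r"(\d+)", name)]

def still_names(videos):
    """Map each input video (in sorted order) to the still name used
    when extracting frames: input-0.jpg, input-1.jpg, ..."""
    ordered = sorted(videos, key=alphanumeric_key)
    return {v: f"input-{i}.jpg" for i, v in enumerate(ordered)}
```

For example, `still_names(["GOPR10.MP4", "GOPR2.MP4"])` maps GOPR2.MP4 to input-0.jpg and GOPR10.MP4 to input-1.jpg.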

If you are already a PTGui or Hugin user, this step should be straightforward. If not, we recommend checking the tutorials available on the PTGui and Hugin websites; there are plenty that will get you started quickly.

 

Once PTGui or Hugin launches, you might be asked for information about your lens and camera. This information allows PTGui and Hugin to automatically detect how to stitch the images together. You can speed up your PTGui / Hugin workflow by using templates:

  • PTGui and Hugin have “File > Save as template” and “File > Apply template” commands that allow you to easily re-use projects.
  • You can set a default project template in PTGui using 

 

Creating re-usable calibrations

It is recommended to create a few good quality templates that can be used to instantly bootstrap new projects.

Properly synchronised videos are necessary to preview and avoid synchronisation errors. Creating a calibration without making sure the videos are properly synchronised is a common mistake when getting started with GoPro camera arrays.

These few guidelines should help you ensure quality calibration files :

  • A single calibration file cannot fit all situations. It works best when it has been created for a specific ‘distance from the camera’. Create calibrations for indoor, outdoor, or even finer intervals.
  • Add control points to objects that are roughly at the same distance from the camera.
  • Use videos shot with static cameras, in a bright and static environment, especially if your cameras often have synchronisation errors or rolling shutter (e.g. Hero2 and Hero3 cameras). Furthermore, camera motion introduces motion blur in the image, which lowers the accuracy of the control points created in the calibration process.
  • Add control points to all overlapping images

 

 

Applying a calibration

Simply drag & drop a PTGui or Hugin file onto VideoStitch to instantly apply:

  • camera positions and orientations (yaw, pitch, roll, viewpoint correction, shift)
  • camera response curves and vignetting
  • lens profile (projection, fov, a,b,c)
  • output projection
  • output size
  • PTGui masks

All other parameters stay unchanged (synchronization, exposure compensation … )

  • Applying calibration makes it easy to review and compare different calibrations.

 

 

Editing a calibration

Editing a calibration is done directly in PTGui / Hugin. When VideoStitch extracts images, it names them based on the input indexes : input-0.jpg, input-1.jpg, … input-N.jpg

 

sample project folder

 

Thanks to this naming convention, you can easily re-use calibration files.

 

You can also refresh PTGui or Hugin with VideoStitch’s current frames:

  • Edit > extract stills to choose the directory where to save extracted images.
  • Ctrl + Shift + E keyboard shortcut to extract images directly to that directory (without a dialog window)

PTGui/Hugin will automatically reload the images when they are overwritten by new ones.

 

When editing a calibration in PTGui/Hugin :

  • Don’t use ‘Image Shear’ (g & t image parameters). This parameter is not used by VideoStitch and would influence other geometric parameters of the calibration.
  • Do not change image order as this would switch camera positions in VideoStitch.

Synchronization

GoPro camera arrays – and arrays made of consumer cameras in general – are typically hard to start all at once, and need to be accurately synchronized for good stitching results. Furthermore, it is impossible to ensure that each frame set will be recorded by all cameras simultaneously.

There are a few things to keep in mind when dealing with synchronization issues :

  • Record with a high fps when possible; this gives you finer ‘grain’ when fine-tuning synchronization
  • Be aware that rolling shutter can be mistakenly identified as a synchronization error. This is especially true on footage that holds fast camera movements.
  • Be aware of possible AV (Audio/Video) synchronization issues when using audio synchronization

 

Adjusting synchronization

One of the most useful VideoStitch features is the ability to change synchronization on the fly and instantly review the result.

  • In order to review and adjust synchronization, you need to preview the stitched video.

 

To access the synchronization widget, use the “Edit > Synchronization” menu.

 

Synchronization widget

 

The widget offers an audio synchronization tool, as well as a direct access to the synchronization settings.

When you change one of the offset values, VideoStitch instantly updates the output preview. Values can be changed while the video is playing.

 

For each input, a checkbox allows you to “link” values together, so that they remain synchronized.

Synchronisation checkboxes

 

For example:

Adjusting input-0 and input-1 then checking them ensures these 2 will remain synchronized.

Synchronize input-2 with input-1 or input-0, then check it as well so that these 3 videos remain synchronized: increasing or decreasing one of their offsets also updates the other two.
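The linking behaviour described above can be modelled as follows (a minimal sketch of the behaviour, not VideoStitch’s actual implementation; `SyncGroup` is a hypothetical name):

```python
class SyncGroup:
    """Minimal model of 'linked' synchronization offsets: adjusting one
    linked input shifts every other linked input by the same delta,
    so the group stays mutually synchronized."""

    def __init__(self, offsets):
        self.offsets = dict(offsets)   # input name -> frame offset
        self.linked = set()            # inputs whose checkbox is ticked

    def link(self, name):
        self.linked.add(name)

    def set_offset(self, name, value):
        delta = value - self.offsets[name]
        if name in self.linked:
            # Shift every linked input by the same amount.
            for other in self.linked:
                self.offsets[other] += delta
        else:
            self.offsets[name] = value
```

For example, linking input-0 and input-1 and then moving input-0 by +2 frames also moves input-1 by +2 frames, while unlinked inputs keep their offsets.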

 

 

Audio based synchronization

The synchronization widget has an “Audio synchronization” tool that analyses the videos’ soundtracks to find out how they match, and automatically adjusts synchronization based on this analysis.

The “synchronize” button automatically computes and applies the result to your project.

Audio_sync_options

  • start point : timecode at which the algorithm will start analyzing sound
  • end point : timecode at which the algorithm will stop

The default values cover the first 15 seconds of your videos. This assumes you have started all cameras and produced a loud sound pattern within 15 seconds.

If you must rely on audio to synchronize videos, you need to produce a sound that is identifiable over the background noise for all cameras. Make sure it is included in the sequence defined by start point and end point.

  • Audio-based synchronization often provides erroneous results with audio tracks from different cameras/microphones. It performs well if a single sound signal was fed to all cameras in the camera array.
  • Keep in mind that relying exclusively on audio to synchronize the input videos will often result in poor synchronization. Some cameras, including GoPro, may provide poor AV synchronization.
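Conceptually, audio-based synchronization searches for the time offset at which two soundtracks match best. Here is a toy brute-force cross-correlation sketch (real implementations work on decoded audio samples and use far more robust, typically FFT-based, methods; `best_offset` is a hypothetical helper):

```python
def best_offset(ref, other, max_lag):
    """Find the lag (in samples) that best aligns `other` with `ref`,
    by brute-force dot-product cross-correlation. A positive lag means
    `other` started recording later than `ref`."""
    def score(lag):
        if lag >= 0:
            a, b = ref[lag:], other
        else:
            a, b = ref, other[-lag:]
        n = min(len(a), len(b))
        return sum(x * y for x, y in zip(a[:n], b[:n]))
    return max(range(-max_lag, max_lag + 1), key=score)
```

For instance, if `other` is `ref` with its first 3 samples missing (it started 3 samples late), the best lag found is 3. A loud, distinctive sound pattern, as recommended above, is exactly what makes this correlation peak stand out from background noise.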

 

AV (Audio/Video) synchronization issues

Audio / video synchronization refers to the soundtrack of a video not being properly synchronized with the image data. A common example of this would be lips moving while the sound coming out of them seems to lag. The following screenshot illustrates the issue explicitly:

 

Bad AV sync with GoPro Hero3 cameras (Video courtesy: Roberto Mancuso)

 

We can clearly see that the recorded image data is out of sync with the audio soundtrack by 2 frames, which will produce synchronization-related errors in the stitched output.

While this doesn’t completely defeat the purpose of audio synchronization, it makes it necessary to review the stitched output and manually fine-tune the synchronization offset values in order to get the right adjustments.

 

Exposure compensation

 

Automatic exposure compensation analyzes the input videos and computes exposure adjustments. It creates keyframes at a specified frame interval on the input exposure parameters. Exposure between each keyframe is automatically interpolated.

 

  • Calculating exposure currently ignores and overwrites all previously computed exposure values and related keyframes.
  • You should always perform automatic exposure compensation after the input videos have been synchronised.

 

Exposure compensation is accessed using “Edit > Exposure compensation”

 

Exposure compensation widget

 

Start point: start of the sequence on which exposure compensation will be processed. The default value is the first frame of your project.

End point: end of the sequence on which exposure compensation will be processed. The default value is the last frame of your project.

Adjust every: interval between each adjustment; a keyframe will be created for each input exposure parameter. The default interval value is 2 seconds. Lower values process more slowly but give better results. Adjust depending on your project:

  • If lighting conditions change frequently, use a lower interval (e.g. a value of 1 will generate an exposure keyframe for each frame in your video).
  • Use higher interval values if lighting conditions don’t change in your videos.

To cancel auto exposure, simply close the widget.
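Between keyframes, exposure is interpolated automatically. A minimal sketch of such interpolation (assuming simple linear interpolation; the actual curve VideoStitch uses is not specified here):

```python
import bisect

def interpolate_exposure(keyframes, frame):
    """Interpolate an exposure value between keyframes.
    `keyframes` is a sorted list of (frame, exposure_value) pairs, e.g.
    one every `adjust_every * fps` frames. Frames outside the keyframed
    range are clamped to the nearest keyframe's value."""
    frames = [f for f, _ in keyframes]
    i = bisect.bisect_right(frames, frame)
    if i == 0:
        return keyframes[0][1]
    if i == len(keyframes):
        return keyframes[-1][1]
    (f0, v0), (f1, v1) = keyframes[i - 1], keyframes[i]
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)
```

For example, with keyframes at frames 0 and 48 (a 2-second interval at 24 fps) holding exposure values 0.0 and 1.0, frame 24 interpolates to 0.5.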

 

The timeline with keyframes is currently a beta feature and disabled by default.
Check out this forum post to learn how to activate the timeline with keyframes.

 

Exposure compensation on 48 fps video, with keyframes generated :

Working with masks

  • Only PTGui masks are currently supported, from which only the “red” (exclusive) masks are used (green masks won’t be imported).
  • Hugin masks work differently, they are not supported and will not be imported by VideoStitch when you apply a Hugin calibration.

Masks allow you to hide parts of input videos so that they do not appear in the final output. Use masks when you want to push the seams of an input video, hiding this video and revealing the other overlapping videos, in order to fine-tune stitching for a specific feature in the resulting video.

 

 

Masks are static over time; you can seek to any frame in the video and instantly review how the mask affects the stitched output.

  • When applying a PTGui template to a VideoStitch project, the masks will automatically be imported.
  • Editing and removing masks has to be done in PTGui.

If multiple masks are overlapping, no image data will appear in the output. The final stitched output will hold a “black hole” (corresponding to what PTGui would output as an alpha channel).

Rendering the final video

Process panel

 

To render the output video file, simply switch to the process panel :

VideoStitch-1.2.0-process

 

  1. Set the output file name using the ‘browse’ button.
  2. Review important project settings: blender and output size. The ‘maximum’ button will attempt to compute the maximum size.
  3. Hit ‘Send to batch’ to add the project to the batch stitcher queue. ‘Send a copy of the project’ is an option to duplicate and save the project with a different name; the copy will be sent to the batch queue while your current project remains open in VideoStitch so that you can further edit it.
  4. Hit ‘process’ to start rendering the video immediately. VideoStitch Extended gives you the option to choose one or multiple CUDA GPUs.
  5. Set the desired video encoding.
  6. Choose which input should be used as the audio source for the output video.
  7. Select the soundtrack that should be copied from one input video to the output.
  8. Set the time parameters of the sequence to render.
  9. Projection and horizontal FOV values for the output video can be changed in the process settings. It is recommended to change the projection and output FOV directly in PTGui or Hugin.

  • Changing the fps value will affect the length and playback speed of the output video. It is recommended to keep the same framerate as the original video.

 

Encoding settings

  • Large panoramic videos require higher bitrate values than regular videos.
  • Fast motion content requires higher encoding bitrates.
  • It is highly recommended to use output sizes that are multiples of 16, e.g.:
    • 1920×960, 3840×1920, 4096×2048, 4800×2400, 5120×2560
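A small hypothetical helper for snapping a desired output size to the nearest multiple of 16 (just an illustration of the recommendation above, not a VideoStitch function):

```python
def snap_to_multiple(width, height, multiple=16):
    """Round an output size to the nearest multiple of `multiple`
    (16 by default, as recommended for encoder-friendly dimensions)."""
    def snap(value):
        return max(multiple, round(value / multiple) * multiple)
    return snap(width), snap(height)
```

For example, a requested 3841×1921 output snaps to 3840×1920, while sizes that are already multiples of 16 are left untouched.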

 

Video codecs

 

H264

Maximum resolution = 4096 pixels.

This is the default encoding. It is supported by most software and provides the best compromise between compression quality and file size.

 

MPEG4

The output size must be a multiple of 8. MPEG4 doesn’t support videos exceeding 8192 pixels in either dimension.

VideoStitch outputs MPEG4 Part 2 (not AVC) encoded video.

 

MPEG2

MPEG2 doesn’t support resolutions that are multiples of 4096 (e.g. 4096 px, 8192 px).

The MPEG2 codec is widely supported by video players. It provides an acceptable quality at the price of a high bitrate.

Very high resolution videos (over 8192 pixels) won’t be decoded properly by most video players and editing suites, as such high resolutions are not yet common in the industry.

 

Exporting very high resolution sequences

Video encoding for very high resolution output can be problematic:

  • when the maximum available bitrate is insufficient for the output resolution’s needs.
  • when your video editing suite doesn’t properly decode very high resolution videos (most video players won’t properly handle videos over 8K).

In such situations, you may want to fall back to an image sequence export, such as *.jpg or *.tiff.

Using the batch stitcher

The batch stitcher, available since VideoStitch 1.2.0, allows you to prepare multiple VideoStitch projects and process them all at once later.

 

There are 3 ways to add projects to the batch stitcher :

  • From VideoStitch, click ‘send to batch’.
    • If you choose “send a copy of the project”, you can save a copy of the project. The saved copy will be sent to batch, so that you can continue editing your current project. This is especially useful if you need to process the same sequence with multiple calibrations for advanced post-processing in a 3rd party software.
  • Directly from the batch stitcher: simply drop a project onto the batch stitcher, or choose ‘File > Add projects’
save a copy and send it to batch

 

When using the batch stitcher, it is highly recommended to close projects that are already open in VideoStitch. We designed VideoStitch to use the best balance of system resources; however, video stitching is a resource-intensive task, so keep in mind that editing a project while stitching in batch will perform rather slowly on some systems.

[info]

VideoStitch Extended users can configure the batch to run on one or multiple separate GPUs before processing.

In this case, be aware that the CPU might become the system’s bottleneck. Learn more about CPU usage and optimizing your configuration for VideoStitch in this article.

[/info]

 

Right click on a project to access various options such as removing, resetting or editing the projects.

right click on a project to access the batch options

 

Publishing

Publishing interactive videos

Publishing for interactive 360 video players usually requires you to produce a full (360×180) equirectangular video.

VideoStitch doesn’t offer any built-in tool for web publishing and focuses on producing stitched videos at this point.

The following solutions have been around for some time, can be scripted, and can be entirely customized:

Krpano: a versatile panorama player. It handles 360 video perfectly; make sure to check out its forums.

Lucid viewer: has a Flash viewer as well as Android and iOS apps! It doesn’t get as much coverage as krpano, but it’s been around for a while. Take a look at it!

Ryubin panorama laboratory: a pioneer of Flash 360 video publishing, Ryubin panorama laboratory has been developing its own Flash panorama player since 2007! You’ll also find a handful of utilities for converting/remapping panoramic images on its website.

 

Encoding for the web:

VideoStitch’s output videos play very well with specialized, free and open source tools to batch encode videos for the web.

Handbrake: our favorite; it offers a wide range of advanced settings that will fit all your needs!

Miro Video Encoder: a very convenient tool, but you’ll have to survive giving up control over encoding settings.