VideoStitch Studio overview

Workflow overview

Our YouTube channel hosts a playlist of video tutorials for VideoStitch Studio. The first one gives an overview of the software's workflow:

 

Workspace overview

Studio splash screen

The VideoStitch Studio workspace offers four panels, available from the upper right corner.

  • the Source panel displays the input videos
  • the Output panel allows you to preview the stitched result
  • the Interactive panel allows you to preview the stitched video in an interactive viewer
  • the Process panel is the place to process the video and adjust settings related to the stitching project
The Output, Interactive and Process panels become available after a video or a project has been imported.

Timeline


 

 

The timeline allows you to play, pause, and seek within the videos.

 

The interactive timeline displays playback progress. The light grey area represents the sequence that will be processed by the algorithms (synchronization, calibration, exposure, rendering the output file…).

You might want to use different sequences while processing your project: the sequence used for synchronizing the videos doesn't need to match the one used for rendering.

Below the play/pause button, you can find:

  • On the left: the first frame's timecode, and a button to set it to the current frame
  • On the right: the last frame's timecode, and a button to set it to the current frame

 

Studio timeline

Process settings

Process settings apply to both the GUI preview and the output video file. You may want to use a reasonably low resolution and the Linear blending mode to improve preview playback speed, then switch to a higher resolution and better blending quality just before rendering the final file.

Project settings

Output file path: the path used for the whole project. By default, your project files (.ptv/.ptvb), image snapshots, and output videos will be saved in that folder.

Stitcher settings:

  • Blender: the type of blending used to merge the videos together.
    • Linear: a simple yet efficient and extremely fast blending.
    • Multi-band: a more complex blending type that requires more graphics memory.
  • Width / Height: the resolution of the output video
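
The trade-off between the two blenders can be illustrated with a minimal sketch of linear blending: each output pixel is a weighted average of the overlapping inputs, with weights that fade toward each input's edge. This is an illustrative model only (using NumPy), not VideoStitch Studio's actual implementation.

```python
import numpy as np

def linear_blend(img_a, img_b, weight_a, weight_b):
    """Weighted average of two overlapping images.

    img_a, img_b: float arrays of shape (H, W, 3).
    weight_a, weight_b: per-pixel weights of shape (H, W), e.g. feathered
    masks that fall off toward each input's edge.
    """
    wa = weight_a[..., np.newaxis]
    wb = weight_b[..., np.newaxis]
    total = wa + wb
    # Avoid division by zero where neither input covers the pixel.
    total[total == 0] = 1.0
    return (img_a * wa + img_b * wb) / total
```

Multi-band blending instead splits the images into frequency bands and blends each band separately, which hides seams better but requires keeping several image pyramids in graphics memory at once.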

Output file settings:

  • Start time / Stop time: define the timecodes of the first and last frames of the sequence to be processed.
  • Copy audio from: pick which audio input to use in the processed output video file

 

Performance monitoring

VideoStitch Studio displays preview playback speed – the speed at which it is stitching and displaying your project.

Studio performances display

The playback speed is automatically limited to the speed of the original videos.

With high-end graphics cards, the CPU may become the bottleneck: encoding/decoding of the input and output videos is done on the CPU, while the stitching is done on the GPU.

 

GPU monitoring


1) GPU Memory Usage: displays "used MB" / "total available MB" values. It shows not only VideoStitch Studio's GPU memory usage, but that of all applications currently using the graphics card. The maximum memory available depends on the graphics card.

2) GeForce GTX690: the CUDA device (graphics card) currently used by VideoStitch Studio. If you have multiple graphics cards (CUDA devices) on your system, you can set the one(s) to be used from the "Preferences" menu (see "Setting up preferences" just below).

3) Stitched size: the size currently rendered by VideoStitch Studio. The stitched size has an impact on both memory usage and rendering speed. It can be set in the "Process" panel.

In general, the more graphics memory is available to VideoStitch Studio, the higher the output resolution you can reach. See our hardware page for more information about graphics cards.
Some high-end graphics cards embed multiple GPUs. These cards behave just like a multi-GPU setup and will be displayed in the Preferences menu as two different CUDA devices. The amount of memory that VideoStitch can use is the per-GPU graphics memory; e.g. the GeForce GTX690, advertised as a 4 GB card, offers 2 GB per GPU and will only allow stitching what 2 GB can handle.
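
To get an intuition for why the stitched size matters, here is a rough back-of-the-envelope sketch of the memory footprint of a single RGBA frame buffer. This is an illustrative lower bound only: the stitcher's actual GPU memory usage also covers the input frames and intermediate buffers.

```python
def frame_buffer_mb(width, height, bytes_per_pixel=4):
    """Rough memory footprint of one RGBA frame buffer, in MB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

# A GTX 690 offers 2048 MB per GPU; a single 4096x2048 RGBA
# output frame alone needs:
print(frame_buffer_mb(4096, 2048))  # 32.0 MB
```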

 

Setting up preferences

The preferences panel is accessed through "Edit > Preferences".

Studio preferences panel

CUDA Devices: allows you to specify which graphics card(s) VideoStitch Studio should work with.

Calibration Tool: if you are using an external calibration tool (PTGui or Hugin) to improve the stitching results, set the path to your calibration tool here.

Language: sets the GUI language. Currently, French and English are the only available translations. (You need to restart VideoStitch Studio for language changes to take effect.)

Check for beta updates: check this if you want to be notified first when we release a new version of the software.

Beta releases might not be as stable as final releases: you will get new features earlier, but possibly also more bugs.

 

Keyboard shortcuts

You can find a complete list of keyboard shortcuts in Help > Shortcuts.

Left & right arrows: previous & next frame

Space bar: play/pause

Ctrl + J: jump to a given frame

Shift + Home: set the first frame

Shift + End: set the last frame

Ctrl + T: apply template

Ctrl + E: extract current frames from input videos

Ctrl + Shift + E: extract current frames to…

Ctrl + F5: reload the current project

Synchronization

GoPro camera arrays – and arrays made of consumer cameras in general – are typically hard to start all at once and need to be accurately synchronized for good stitching results. Furthermore, it is impossible to ensure that each frame set will be recorded by all cameras simultaneously. Synchronization is the first step of the stitching process.

There are a few things to keep in mind when dealing with synchronization issues:

  • Record at a high fps when possible; this gives you finer 'grain' when fine-tuning synchronization
  • Be aware that rolling shutter can be mistaken for a synchronization error. This is especially true on footage with fast camera movements.
  • Be aware of possible AV (Audio/Video) synchronization issues when using audio synchronization

To access the synchronization widget, use the "Window > Synchronization" menu. The widget offers audio, motion, and flash synchronization tools, as well as direct access to the synchronization settings.

Studio synchronization

You can find a step-by-step tutorial of VideoStitch Studio synchronization on our YouTube channel:


 

Audio-based synchronization

The synchronization widget has an "Audio synchronization" tool that analyzes the videos' soundtracks to determine how they match, and automatically adjusts synchronization based on this analysis.

If you must rely on audio to synchronize videos, you need to produce a sound that is identifiable over the background noise and audible from all cameras. You can, for instance, clap your hands. This algorithm is not recommended in noisy environments (concerts, …).

The “synchronize” button automatically computes and applies the result to your project.

  • Start point: timecode at which the algorithm will start analyzing sound
  • End point: timecode at which the algorithm will stop

  • Audio-based synchronization often provides erroneous results with audio tracks from different cameras/microphones. It performs well if a single sound signal was fed to all cameras in the array.
  • Keep in mind that relying exclusively on audio to synchronize the input videos will often result in poor synchronization. Some cameras, including GoPros, may provide poor AV synchronization.
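
The core idea behind audio synchronization can be sketched with a cross-correlation of two soundtracks: the lag at which the correlation peaks is the offset between the recordings. The helper below is a hypothetical illustration (using NumPy), not Studio's actual algorithm.

```python
import numpy as np

def audio_offset(track_a, track_b):
    """Estimate the offset (in samples) of track_b relative to track_a
    via cross-correlation. Illustrative helper, not the Studio API.

    Returns the shift to apply to track_b so that it lines up with
    track_a (negative means track_b lags behind track_a).
    """
    corr = np.correlate(track_a, track_b, mode="full")
    # Index of maximum correlation, re-centered around zero lag.
    return int(np.argmax(corr)) - (len(track_b) - 1)
```

This is exactly why a sharp hand clap works well: it produces a single unambiguous correlation peak, whereas diffuse background noise (a concert crowd, wind) produces many competing peaks.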

Motion-based synchronization

The motion algorithm looks for motion in all your input videos, then aligns the start and end points of this movement. You can, for instance, give your rig a sharp spin at the beginning of your video.

Again, make sure to select start and end points so that the processed sequence includes your movement.

 

Flash-based synchronization

The light from the flash needs to be visible from all your cameras' viewpoints. You can, for instance, put a bag over your rig and quickly remove it, toggle the lights in the room you are working in, or use synchronized professional flashes.

 

AV (Audio/Video) synchronization issues

Audio/video synchronization refers to the soundtrack of a video not being synchronized properly with the image data. A common example of this would be lips moving while the sound coming out of them seems to suffer from lag. The following screenshot illustrates the issue:

 

Bad AV sync with GoPro Hero3 cameras (Video courtesy: Roberto Mancuso)

 

We can clearly see that the recorded image data is out of sync with the audio soundtrack by 2 frames, which will produce synchronization-related errors in the stitched output.

While this doesn't completely defeat the purpose of audio synchronization, it makes it necessary to review the stitched output and manually fine-tune the synchronization offset values in order to get the right adjustments.

Tips & tricks: adjust synchronization on the fly

One of the most useful VideoStitch features is the ability to change synchronization on the fly and instantly review the result. When you change one of the offset values, VideoStitch instantly updates the output preview. Values can be changed while the video is playing.

For each input, a checkbox allows you to "link" values together so that they remain synchronized.

Synchronization check-boxes

For example:

Adjusting input-0 and input-1, then checking them, ensures these two will remain synchronized.

Synchronize input-2 with input-1 or input-0, then check it as well so that all three videos remain synchronized: increasing or decreasing one of their offsets also updates the other two.
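
The linking behavior described above can be modeled with a small sketch: a delta applied to any linked input propagates to every other linked input, while unlinked inputs move independently. This is an illustrative model of the checkbox behavior, not Studio's internal code.

```python
def nudge_offsets(offsets, linked, index, delta):
    """Apply a frame-offset change to one input.

    offsets: list of per-input frame offsets.
    linked: set of input indices whose checkboxes are checked.
    If the changed input is linked, the same delta is applied to every
    linked input so they stay synchronized.
    """
    if index in linked:
        return [o + delta if i in linked else o
                for i, o in enumerate(offsets)]
    new = list(offsets)
    new[index] += delta
    return new
```

For instance, with inputs 0 and 1 linked, nudging input-0 by +2 frames moves input-1 by +2 as well, while input-2 is left untouched.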

Calibrations and templates

What's a calibration?

A calibration is a set of parameters that defines how the input videos relate to each other, the input camera parameters, …


VideoStitch Studio provides an automatic calibration tool that optimizes both your lenses' settings (vignetting, …) and the camera rig setup.

You will not get a good calibration if your videos are not perfectly synchronized. The first step is always to synchronize your input videos (see the Synchronization section above).

Automatic calibration

VideoStitch Studio comes with calibration algorithms to help you create a panoramic video. There are two kinds of calibration:

  • Geometric calibration: computes the geometric parameters (yaw, pitch, roll, …) needed to stitch and merge the videos into one single output panorama
  • Photometric calibration: optimizes your lens settings (among others, vignetting) so that exposure looks even across the whole panorama

 

Geometric calibration

Our geometric calibration algorithm tries to find control points in your input videos and match them. Once this step is done, your images can be merged together.

You can find a step-by-step tutorial on how to use this feature on our YouTube channel:

Pre-processing

To get an accurate panorama result, all your cameras need to have the same settings:

  • Please do not use the camera zoom feature, or the GoPro4 SuperView mode
  • Choose the correct lens parameters when calibrating (for GoPros, the FOV is usually 120)
  • If you have circular fish-eye lenses, do not forget to crop the input images

Applying the calibration

Our algorithm processes a couple of still images from your inputs to find the geometry parameters needed to match and merge them. It then uses these results to merge the entire input videos.

The algorithm works best on scenes in your video sequences that satisfy the following conditions:

  • The camera rig and the scene it is recording are static, to avoid synchronization issues, motion blur, and rolling shutter (which introduce image distortion).
  • There are enough details in all the images: if the overlap zone between two cameras contains only a piece of sky, ocean, …, the algorithm will not be able to find control points
  • There are no (or very few, and not in the overlap zones) close objects. Objects closer than about two meters (depending on your rig) will introduce errors in the calibration

To specify which scenes the algorithm should use you can:

  1. Use the fully automatic mode, clicking the "Add" button so that scenes are picked automatically from your input videos
  2. Manually add some frames (add the current frame in the timeline) if you think the scene satisfies the above conditions

Studio add calibration frames

Then, click on “1 – Calibrate Geometry” to launch the calibration.

If the automatic geometric calibration doesn't work out of the box, you can also take a look at our FAQ.

Photometric calibration

Since VideoStitch Studio v2.1, you can also apply a "photometric" calibration. It computes the cameras' response curves and vignetting to improve your output quality.

Vignetting is a lens distortion effect that affects all optical lenses. It is most visible near the image edges, which tend to be darker than the center:

Vignetting effect

Using the input cameras' response curves and vignetting, VideoStitch Studio is able to blend the images more smoothly. When applying exposure compensation, it also improves color correction by minimizing color and exposure differences between the inputs.
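
To make the vignetting correction concrete, here is a minimal sketch of undoing a radial falloff with a polynomial model, a common way to describe vignetting. The coefficient names and the exact polynomial are illustrative assumptions; Studio's actual model may differ.

```python
import numpy as np

def devignette(img, a, b, c):
    """Undo radial vignetting with a polynomial falloff model.

    Assumes falloff(r) = 1 + a*r**2 + b*r**4 + c*r**6, with r the
    distance from the image center, normalized so that the corner is 1.
    img: float array of shape (H, W, 3).
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)
    falloff = 1 + a * r**2 + b * r**4 + c * r**6
    # Dividing by the falloff brightens the edges back to the
    # center's level.
    return img / falloff[..., np.newaxis]
```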

In the VideoStitch Studio interface, check the "photometric calibration parameters" box, then click "2 – Calibrate photometry". You will see the camera response curve and vignette coefficients appear.

Studio photometric calibration

 

Manual calibration

You may sometimes want to improve on the automatic calibration (when your scene doesn't have enough details, or has close objects, for instance). VideoStitch Studio is compatible with the PTGui and Hugin software solutions, which can stitch together still images.

Using an external calibration

In “Edit > Preferences” (keyboard shortcut: Ctrl + ,), enter your calibration tool path: for instance “C:/Program Files/PTGui/PTGui.exe”.

Then, go to “Window > Calibration” and click on “Calibration from a file”.

You can find a tutorial on how to use VideoStitch with PTGui here (the process with Hugin is similar):

Don't use 'Image Shear' (the g & t image parameters). These parameters are not used by VideoStitch Studio and influence other geometric parameters.
You can find a list of supported parameters in our FAQ.

If you already have a calibration template created by PTGui or Hugin

Drag & drop a PTGui or Hugin file onto VideoStitch Studio, or, from the "Calibration from a file" tab, click "Browse calibration".

Editing a calibration

Editing a calibration is done directly in PTGui / Hugin. You can also update your previous calibration in PTGui or Hugin (for a more accurate calibration, or frames with a better calibration scene) directly from VideoStitch Studio. To extract the current frames, just click "Edit > Extract stills to" (keyboard shortcut: Ctrl + Shift + E). Pick the same directory you were using before so that PTGui / Hugin can detect that the input images have changed.

If you want to create a new calibration from scratch

Click on "New calibration" and select where you want to save your calibration template. You will then be prompted to enter your camera settings, and you are ready to start your calibration. If you are already a PTGui or Hugin user, this step should be straightforward. If not, we recommend checking the tutorials available on our website for PTGui, or directly on the PTGui or Hugin websites. There are plenty of tutorials that will get you started quickly.

 

Creating re-usable calibrations

You can create good-quality templates that can be used to instantly bootstrap new projects. These templates can also be used to preview synchronization errors (you will not get a good calibration if your videos are not correctly synchronized).

These few guidelines should help you ensure quality calibration files:

  • A single calibration file cannot fit all situations. It works best when it has been created for a specific 'distance from the camera': create calibrations for indoor, outdoor, or even finer intervals.
  • Add control points to objects that are roughly at the same distance from the camera.
  • Use videos shot with static cameras, in a bright and static environment.
  • Add control points to all overlapping images

 

Exposure compensation

After exposure compensation
Hero2 raw exposure

 

Automatic exposure compensation analyzes the input videos and computes exposure adjustments. It creates keyframes at a specified frame interval on the input exposure parameters. Exposure between each keyframe is automatically interpolated.
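
The keyframe-interpolation idea described above can be sketched in a few lines: exposure values are stored only at keyframes, and every frame in between gets a linearly interpolated value. This is an illustrative model, not Studio's internal implementation (which may use a different interpolation).

```python
def interpolate_exposure(keyframes, frame):
    """Linearly interpolate an exposure value between keyframes.

    keyframes: sorted list of (frame_number, exposure_value) pairs,
    as produced every 'Adjust every' frames. Frames outside the
    keyframed range are clamped to the nearest keyframe's value.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
```

With keyframes at frames 0 and 10, frame 5 gets an exposure value exactly halfway between the two, which is why a smaller 'Adjust every' interval tracks fast lighting changes more accurately.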

You can find a step-by-step tutorial of VideoStitch Studio exposure compensation on our YouTube channel:

Calculating exposure currently ignores and overwrites all previously computed exposure values and related keyframes.

 

Algorithm parameters

Exposure compensation is accessed using “Window > Exposure compensation”.

Start point: start of the sequence over which exposure compensation will be processed. The default value is the first frame of your project.

End point: end of the sequence over which exposure compensation will be processed. The default value is the last frame of your project.

Adjust every: interval between adjustments; a keyframe will be created for each input exposure parameter. Lower values process more slowly but give better results:

  • If lighting conditions change frequently in your project, use a lower interval (e.g. a value of 1 will generate an exposure keyframe for every frame of your video).
  • Use higher interval values if lighting conditions don't change in your videos.

Adjust sequence / Adjust here: adjust over the sequence between the start and end points, or just on the current frame.

Exposure compensation on a 48 fps video, with keyframes generated:

Stabilization and orientation

Stabilization is useful if your camera shook during shooting (typically when the camera is moving). It smooths out vertical bumping. Orientation adjustment will help you level the horizon.

You can find a step-by-step tutorial on our YouTube channel:


 

Stabilization

There is an automatic algorithm, which you can then improve manually using the timeline. This algorithm corrects:

  • yaw
  • pitch
  • roll

Just set the start and end points of the sequence you want to process and click on “Process“.

Orientation

You can manually edit the video orientation from the output tab, by clicking on “Edit orientation“.

Studio edit orientation
A grid will be displayed to help you align the horizon at a specific frame. Just grab a point of the image with your mouse and move it!

Working with masks

  • Only PTGui masks are currently supported, and of those only the "red" (exclusive) masks are used (green masks won't be imported).
  • Hugin masks work differently; they are not supported and will not be imported by VideoStitch when you apply a Hugin calibration.

Masks let you hide parts of the input videos so that they do not appear in the final output. Use masks when you want to push the seams of an input video, hiding that video and revealing the other overlapping videos, in order to fine-tune stitching for a specific feature in the resulting video.

 

 

Masks are static over time; you can seek to any frame in the video and instantly review how the mask affects the stitched output.

  • When applying a PTGui template to a VideoStitch project, the masks will automatically be imported.
  • Editing and removing masks has to be done in PTGui.

If multiple masks overlap, no image data will appear in the output: the final stitched output will contain a "black hole" (corresponding to what PTGui would output as an alpha channel).

Rendering the final video

Process panel

To render the output video file, simply switch to the Process panel:


  1. Set the output file name using the 'browse' button.
  2. Review the important project settings: Blender, video start and end times, and output size. The maximum button will attempt to compute the maximum size.
  3. Then decide how you want to process the video:
    1. Hit 'Process Now' to start rendering the video immediately. You can choose to render on one or multiple CUDA GPUs.
    2. 'Send to batch' adds the project to the batch stitcher queue. 'Send a copy of the project' is an option to duplicate and save the project under a different name; the copy will be sent to the batch queue while your current project remains open in VideoStitch for further editing.
  4. Set the desired video encoding parameters.
  5. Select the soundtrack that should be copied from one input video to the output.
  6. Set the projection type and horizontal FOV values for the output video. If you used an external calibration tool, it is recommended to change the projection and output FOV directly in that tool.
Changing the fps value will affect the length and playback speed of the output video. It is not compatible with audio, as we do not provide audio resampling yet. It is recommended to keep the same framerate as the original video.
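
The effect on playback length is simple arithmetic: the same frames simply play faster or slower when re-flagged at a different fps. A quick sketch:

```python
def output_duration_seconds(frame_count, fps):
    """Playback duration when frame_count frames are flagged at fps.

    Changing fps without resampling changes the duration: the same
    frames play back faster or slower, which is also why the audio
    track falls out of sync.
    """
    return frame_count / fps

# 300 frames shot at 30 fps last 10 s; the same 300 frames
# re-flagged at 60 fps last only 5 s.
```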

 

Encoding settings

  • Large panoramic videos and fast-motion video content require a higher bitrate
  • It is highly recommended to use output sizes that are multiples of 16, e.g. 1920×960, 3840×1920, 4096×2048, 4800×2400, 5120×2560
  • To encode your output video specifically for web and mobile devices, please check out this blog post: http://www.video-stitch.com/encoding-workflow/
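
If your target size isn't already a multiple of 16, a simple way to honor the recommendation is to snap each dimension down to the nearest multiple. This hypothetical helper is just a convenience sketch, not part of VideoStitch Studio:

```python
def snap_to_multiple(value, multiple=16):
    """Round a dimension down to the nearest multiple (16 by default),
    matching the recommendation for encoder-friendly output sizes."""
    return (value // multiple) * multiple

# e.g. a 4100x2050 panorama snaps to 4096x2048, keeping the 2:1
# equirectangular aspect ratio.
```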

 

Video codecs

H264: the default encoding. It is supported by most software and provides the best compression / file size compromise. Maximum resolution: 4096 pixels.

MPEG4: MPEG-4 Part 2 (not AVC) encoded video. The output size must be a multiple of 8. Maximum resolution: 8192 pixels.

MPEG2: widely supported by video players, it provides acceptable quality at the price of a high bitrate. Doesn't support resolutions that are multiples of 4096 (e.g. 4096px, 8192px). Very high resolution videos (over 8192 pixels) won't be decoded properly by most video players and editing suites, as such high resolution videos are not yet common in the industry.

Exporting very high resolution sequences

Video encoding for very high resolution output can be problematic:

  • when the maximum available bitrate is insufficient for the output resolution's needs.
  • when your video editing suite doesn't properly decode very high resolution videos (most video players won't properly handle videos over 8K).

In such situations, you may want to fall back to an image sequence export, such as *.jpg or *.tiff.
 

Using the batch stitcher

The batch stitcher has been available since VideoStitch 1.2.0 and allows you to prepare multiple VideoStitch projects and process them all at once later.

To add projects to the batch stitcher, you can:

  • From VideoStitch Studio, click 'Send to batch'. If you choose "Send a copy of the project", you can save a copy of the project; the saved copy will be sent to the batch so that you can continue editing your project. This is especially useful if you need to process the same sequence with multiple calibrations for advanced post-processing in third-party software.
  • Directly drop a project onto the batch stitcher, or choose 'File > Add projects'

save a copy and send it to batch

When using the batch stitcher, it is highly recommended to close projects that are already open in VideoStitch Studio. We designed VideoStitch Studio to use the best balance of system resources; however, video stitching is a resource-intensive task, so keep in mind that editing a project while stitching in batch will perform rather slowly on some systems.

You can configure the batch to run on one or multiple separate GPUs before processing. In this case, be aware that the CPU might become your system's bottleneck. Learn more about CPU usage and optimizing your configuration for VideoStitch in this article.

Right-click on a project to access various options, such as removing, resetting, or editing the projects.

right click on a project to access the batch options