VideoStitch Studio overview
A playlist of video tutorials for VideoStitch Studio is available on our YouTube channel. The first one gives an overview of the software’s workflow:
The VideoStitch Studio workspace offers four panels, available from the upper right corner:
- the Source panel displays the input videos
- the Output panel allows you to preview the stitched result
- the Interactive panel allows you to preview the stitched video in an interactive viewer
- the Process panel is where you render the video and adjust settings related to the stitching project
The timeline allows you to play, pause and seek through the videos.
The interactive timeline displays playback progress. The light grey area represents the sequence that will be processed by the algorithms (synchronization, calibration, exposure, rendering the output file…).
Below the play/pause button, you can find:
- On the left: the first frame’s timecode, and a button to set it to the current frame
- On the right: the last frame’s timecode, and a button to set it to the current frame
Output file path: path used for the whole project. By default, your project files (.ptv/.ptvb), image snapshots and output videos will be saved in that folder.
Stitcher settings:
- Blender: the type of blending used to merge the videos together.
- Linear: a simple yet efficient and extremely fast blending.
- Multi-band: a more complex blending type that requires more graphics memory.
- Width / Height: the resolution of the output video
Output file settings:
- Start time / Stop time: define the timecodes of the first and last frames of the sequence to be processed.
- Copy audio from: pick which audio input to include in the processed output video file
VideoStitch Studio displays preview playback speed – the speed at which it is stitching and displaying your project.
The playback speed is automatically limited to the speed of the original videos.
1) GPU Memory Usage: displays “used MB” / “total available MB” values. This covers not only VideoStitch Studio’s GPU memory usage, but that of all applications currently using the graphics card. The maximum available memory depends on the graphics card.
2) GeForce GTX690: the CUDA device (graphics card) currently used by VideoStitch Studio. If you have multiple graphics cards (CUDA devices) available on your system, you can set the one(s) to be used from the “Preferences” menu (see “Setting up preferences” just below).
3) Stitched size: the size currently rendered by VideoStitch Studio. The stitched size has an impact on how much memory is used and on the rendering speed. It can be set in the “Process” panel.
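As a rough rule of thumb, the memory cost of the stitched size can be estimated from the pixel count. The sketch below is a simplified estimate assuming 4-byte RGBA buffers; it is not VideoStitch Studio’s actual memory accounting, which also keeps per-input and intermediate blending buffers on the GPU:

```python
def estimate_frame_memory_mb(width, height, bytes_per_pixel=4):
    """Rough memory footprint of a single RGBA output frame, in MiB.

    Simplified estimate only: the real stitcher allocates additional
    per-input and blending buffers on the graphics card.
    """
    return width * height * bytes_per_pixel / (1024 * 1024)

# A 4096x2048 equirectangular frame needs roughly 32 MB per buffer:
print(estimate_frame_memory_mb(4096, 2048))  # → 32.0
```

This is why doubling the stitched width and height roughly quadruples the memory needed per frame.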
Setting up preferences
The preferences panel is accessed through “Edit > Preferences”.
CUDA Devices: allows you to specify which graphics card(s) VideoStitch Studio should work with.
Calibration Tool: if you are using an external calibration tool (PTGui or Hugin) to improve the stitching results, set the path to your calibration tool here.
Language: sets the GUI language. Currently, French and English are the only available translations. (Restart VideoStitch Studio to apply language changes.)
Check for beta updates: enable this if you want to be the first notified when we release a new version of the software.
You can find a complete list of keyboard shortcuts in Help > Shortcuts.
Left & right arrows: previous & next frames
Space bar: play/pause
Ctrl + J: jump to a given frame
Shift + Home: set the first frame
Shift + End: set the last frame
Ctrl + T: apply template
Ctrl + E: extract current frames from input videos
Ctrl + Shift + E: extract current frames to …
Ctrl + F5: reload the current project
GoPro camera arrays (and arrays made of consumer cameras in general) are typically hard to start all at once, and need to be accurately synchronized for good stitching results. Furthermore, it is impossible to guarantee that each frame set is recorded by all cameras simultaneously. Synchronization is the first step in the stitching process.
- Record at a high fps when possible; this gives you finer ‘grain’ when fine-tuning synchronization
- Be aware that rolling shutter can be mistakenly identified as a synchronization error. This is especially true for footage that contains fast camera movements.
- Be aware of possible AV (Audio/Video) synchronization issues when using audio synchronization
To access the synchronization widget, use the “Window > Synchronization” menu. The widget offers audio, motion and flash synchronization tools, as well as direct access to the synchronization settings.
You can find a step-by-step tutorial of VideoStitch Studio synchronization on our YouTube channel:
The synchronization widget has an “Audio synchronization” tool that analyzes the videos’ soundtracks to find out how they match, and automatically adjusts synchronization based on this analysis.
If you must rely on audio to synchronize videos, you need to produce a sound that is identifiable over the background noise for all cameras; you can for instance clap your hands. This algorithm is not recommended in a noisy environment (concert, …).
The “synchronize” button automatically computes and applies the result to your project.
- Start point: the timecode at which the algorithm starts analyzing sound
- End point: the timecode at which the algorithm stops
- Audio-based synchronization often gives erroneous results with audio tracks from different cameras/microphones. It performs well if a single sound signal was fed to all cameras in the camera array.
- Keep in mind that relying exclusively on audio to synchronize the input videos will often result in poor synchronization. Some cameras, including GoPro, may provide poor AV synchronization.
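The general technique behind this kind of audio alignment is cross-correlation of the soundtracks. The sketch below illustrates the principle on synthetic data; it is not VideoStitch Studio’s actual algorithm, and the function name is hypothetical:

```python
import numpy as np

def audio_offset(track_a, track_b):
    """Estimate the sample offset between two soundtracks by
    cross-correlation (a sketch of the general technique only).

    A positive result means track_b is delayed relative to track_a.
    """
    corr = np.correlate(track_a, track_b, mode="full")
    return (len(track_b) - 1) - int(np.argmax(corr))

# Synthetic example: a clap (spike) recorded 5 samples later on camera B.
clap = np.zeros(100)
clap[40] = 1.0
delayed = np.roll(clap, 5)
print(audio_offset(clap, delayed))  # → 5
```

Real recordings are noisy, which is why an identifiable sound such as a clap makes the correlation peak easy to find.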
The motion algorithm looks for motion in all your input videos, then aligns the start and end points of this movement. You can for instance give your rig a sharp spin at the beginning of your video.
Again, make sure to select start and end points so that the processed sequence includes your movement.
The light from the flash needs to be visible from all your cameras’ viewpoints. You can for instance put a bag over your rig and quickly remove it, switch the lights of the room you are working in on and off, or use synchronized professional flashes.
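The flash method boils down to finding the frame with the sharpest brightness jump in each input, then comparing the frame indices. A minimal sketch of that idea, assuming per-frame mean brightness values have already been extracted (the function and data are illustrative, not the software’s implementation):

```python
import numpy as np

def flash_frame(mean_brightness):
    """Return the index of the frame where brightness jumps the most.

    Hypothetical sketch: mean_brightness holds one average luma value
    per frame of an input video.
    """
    jumps = np.diff(np.asarray(mean_brightness, dtype=float))
    return int(np.argmax(jumps)) + 1  # frame where the flash appears

# Two cameras seeing the same flash at different frame indices:
cam_a = [10, 10, 11, 80, 30, 10]   # flash visible at frame 3
cam_b = [12, 80, 31, 12, 11, 11]   # flash visible at frame 1
offset = flash_frame(cam_a) - flash_frame(cam_b)
print(offset)  # → 2 (camera A started recording 2 frames earlier)
```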
AV (Audio/Video) synchronization issues
Audio/video synchronization refers to the soundtrack of a video not being synchronized properly with the image data. A common example of this is lips moving while the sound coming out of them seems to lag. The following screenshot explicitly illustrates the issue:
We can clearly see that the recorded image data is out of sync with the audio soundtrack by 2 frames, which will produce synchronization-related errors in the stitched output.
While this doesn’t completely defeat the purpose of audio synchronization, it makes it necessary to review the stitched output and manually fine-tune the synchronization offset values in order to get the right adjustments.
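For reference, a frame offset translates to a time offset through the frame rate. This trivial helper (hypothetical, not part of the software) makes the 2-frame example concrete:

```python
def frames_to_ms(frame_offset, fps):
    """Convert an AV desynchronization measured in frames to milliseconds."""
    return 1000.0 * frame_offset / fps

# A 2-frame drift on 30 fps footage is roughly a 67 ms audio lag:
print(round(frames_to_ms(2, 30)))  # → 67
```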
Tips & tricks: adjust synchronization on the fly
One of the most useful VideoStitch features is the ability to change synchronization on the fly and instantly review the result. When you change one of the offset values, VideoStitch instantly updates the output preview. Values can be changed while the video is playing.
For each input, a check-box allows you to “link” values together, so that they remain synchronized.
For example:
Adjusting input-0 and input-1 then checking them ensures these 2 inputs will remain synchronized.
Synchronize input-2 with input-1 or input-0, then check it as well so that these 3 videos remain synchronized: increasing or decreasing one of their offsets also updates the other two.
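The linking behaviour described above can be sketched as follows. The class and names are purely illustrative, not VideoStitch Studio’s API; they only model how a change to one linked offset propagates to the others:

```python
class SyncOffsets:
    """Sketch of the 'link' check-box behaviour: changing one linked
    input's offset shifts all other linked inputs by the same amount.
    """

    def __init__(self, offsets):
        self.offsets = dict(offsets)   # input name -> frame offset
        self.linked = set()            # inputs whose check-box is ticked

    def link(self, name):
        self.linked.add(name)

    def set_offset(self, name, value):
        delta = value - self.offsets[name]
        self.offsets[name] = value
        if name in self.linked:
            for other in self.linked - {name}:
                self.offsets[other] += delta

offsets = SyncOffsets({"input-0": 0, "input-1": 3, "input-2": 7})
for name in ("input-0", "input-1", "input-2"):
    offsets.link(name)
offsets.set_offset("input-0", 2)   # the +2 change propagates to the others
print(offsets.offsets)  # → {'input-0': 2, 'input-1': 5, 'input-2': 9}
```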
Calibrations and templates
What’s a calibration?
A calibration is a set of parameters that define how the input videos relate to each other, the input camera parameters, …
VideoStitch Studio provides you with an automatic calibration tool that optimizes both your lenses’ settings (vignetting, …) and the camera rig setup.
VideoStitch Studio comes with calibration algorithms, to help you create a panoramic video. There are two kinds of calibration:
- Geometric calibration: computes the geometric parameters to stitch and merge the videos together in one single output panorama (yaw, pitch, roll, …)
- Photometric calibration: optimizes your lens settings (among others vignetting) so that your exposure looks even in the whole panorama
Our geometric calibration algorithm will try to find control points in your input videos and to match them. Once this step is done, we can merge your images together.
You can find a step-by-step tutorial on how to use that feature on our YouTube channel:
To get an accurate panorama result, all your cameras need to have the same settings:
- Please do not use the camera zoom feature or the GoPro4 Superview mode
- Choose the correct lens parameters when calibrating (for GoPros, the FOV is usually 120)
- If you have circular fisheye lenses, do not forget to crop the input images
Applying the calibration
Our algorithm processes a couple of still images from your inputs in order to find the geometry parameters to match and merge them. It then uses these results to merge the full input videos.
The algorithm is optimized on scenes in your video sequences that satisfy the following conditions:
- The camera rig and the scene it is recording are static, to solve synchronization issues and avoid motion blur and rolling shutter (those introduce image distortion).
- There are enough details in all the images: if the overlap zone between two cameras contains only a piece of sky, ocean, … the algorithm will not be able to find control points
- There are no (or very few, and not in the overlap zones) close objects. Objects closer than about two meters (depending on your rig) will introduce errors in the calibration
To specify which scenes the algorithm should use you can:
- Use the fully automatic mode, clicking on the “Add” button so that scenes are picked automatically from your input videos
- Manually add some frames (add the current frame in the timeline) if you think the scene satisfies the above conditions
Then, click on “1 – Calibrate Geometry” to launch the calibration.
Since VideoStitch Studio v2.1, you can also apply a “photometric” calibration. It computes the cameras’ response curves and vignetting to improve your output quality.
Vignetting is a lens distortion effect that affects all optical lenses. It is more visible close to the images’ edges, which tend to be darker than the center:
Using the input cameras’ response curves and vignetting, VideoStitch Studio is able to blend the images more smoothly. When applying exposure compensation, it will also improve the color correction by minimizing color and exposure differences between the inputs.
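A classic textbook model for vignetting is a cos^4 falloff from the image center. Photometric calibration estimates the actual per-lens coefficients instead, but this hypothetical helper shows why edge pixels need a gain greater than 1 to even out the exposure:

```python
import math

def vignette_gain(x, y, width, height, strength=1.0):
    """Inverse-vignetting gain using the textbook cos^4 falloff model.

    Illustrative approximation only; VideoStitch Studio estimates the
    real per-lens vignette coefficients during photometric calibration.
    """
    cx, cy = width / 2, height / 2
    # Normalized distance from the image center (0 at center, 1 at corner).
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    falloff = math.cos(math.atan(r * strength)) ** 4
    return 1.0 / falloff  # multiply the pixel by this to brighten the edges

print(vignette_gain(960, 540, 1920, 1080))  # center pixel → gain 1.0
print(vignette_gain(0, 0, 1920, 1080))      # corner pixel → gain 4.0
```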
In the VideoStitch Studio interface, check the “photometric calibration parameters” box, then click on “2 – Calibrate photometry”. You will see the camera response curves and vignette coefficients appear.
You may sometimes want to improve the automatic calibration (when your scene doesn’t have enough details, or contains close objects, for instance). VideoStitch Studio is compatible with the PTGui and Hugin software solutions, which can stitch still images together.
Using an external calibration
In “Edit > Preferences” (keyboard shortcut: Ctrl + ,), enter your calibration tool path: for instance “C:/Program Files/PTGui/PTGui.exe”.
Then, go to “Window > Calibration” and click on “Calibration from a file”.
You can find a tutorial on how to use VideoStitch with PTGui here (the process with Hugin is similar):
If you already have a calibration template created by PTGui or Hugin
Drag & drop a PTGui or Hugin file on VideoStitch Studio, or from the “Calibration from a file” tab click on “Browse calibration“.
Editing a calibration
Editing a calibration is done directly in PTGui / Hugin. You can also update your previous calibration in PTGui or Hugin (more accurate calibration, frames with a better calibration scene) directly from VideoStitch Studio. To extract the current frames, just click on “Edit > Extract stills to” (keyboard shortcut: Ctrl + Shift + E). Pick the same directory you were using before so that PTGui / Hugin can detect that the input images have changed.
If you want to create a new calibration from scratch
Click on “New calibration” and select where you want to save your calibration template. You will then be prompted to enter your camera settings, and you are ready to start your calibration. If you are already a PTGui or Hugin user, this step should be straightforward. If not, we recommend checking the tutorials available on our website for PTGui, or directly on the PTGui and Hugin websites. There are plenty of tutorials that will get you started quickly.
Creating re-usable calibrations
You can create good quality templates that can be used to instantly bootstrap new projects. These templates can also be used to preview synchronization errors (you will not get a good calibration if your videos are not correctly synchronized).
These few guidelines should help you ensure quality calibration files:
- A single calibration file cannot fit all situations. It works best when it has been created for a specific ‘distance from the camera’. Create calibrations for indoor, outdoor, or even finer intervals.
- Add control points to objects that are roughly at the same distance from the camera.
- Use videos shot with static cameras, in a bright and static environment.
- Add control points to all overlapping images
Automatic exposure compensation analyzes the input videos and computes exposure adjustments. It creates keyframes at a specified frame interval on the input exposure parameters. Exposure between each keyframe is automatically interpolated.
You can find a step-by-step tutorial of VideoStitch Studio exposure compensation on our YouTube channel:
Exposure compensation is accessed using “Window > Exposure compensation”.
Start point: the start of the sequence on which exposure compensation will be processed. The default value is the first frame of your project.
End point: the end of the sequence on which exposure compensation will be processed. The default value is the last frame of your project.
Adjust every: the interval between each adjustment; a keyframe will be created for each input exposure parameter. Lower values process slower but give better results:
- If lighting conditions change frequently in your project, use a lower interval (e.g. a value of 1 will generate an exposure keyframe for each frame of your video).
- Use higher interval values if lighting conditions don’t change in your videos.
Adjust sequence / Adjust here: adjust on the sequence between start and end points, or just on the current frame.
Exposure compensation on a 48 fps video, with the generated keyframes:
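The keyframe-plus-interpolation behaviour of “Adjust every” can be sketched with simple linear interpolation. The function below is illustrative, assuming the analysis has already produced per-keyframe exposure values:

```python
import numpy as np

def interpolate_exposure(keyframes, num_frames):
    """Linearly interpolate per-frame exposure values between keyframes.

    Sketch of the 'Adjust every' behaviour only.
    keyframes: {frame_index: exposure_value} produced by the analysis.
    """
    frames = sorted(keyframes)
    values = [keyframes[f] for f in frames]
    return np.interp(np.arange(num_frames), frames, values)

# Keyframes every 4 frames ("Adjust every" = 4) on a 9-frame sequence:
curve = interpolate_exposure({0: 0.0, 4: 1.0, 8: 0.0}, 9)
print(curve.tolist())  # → [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
```

A smaller interval means more keyframes and a curve that tracks real lighting changes more closely, at the cost of a longer analysis.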
Stabilization and orientation
Stabilization is useful if your camera shook during shooting (typically when the camera is moving). It smooths out the vertical bumping. Orientation adjustment will help you flatten the horizon.
You can find a step-by-step tutorial on our YouTube channel:
There is an automatic algorithm, whose results you can then refine manually in the timeline.
Just set the start and end points of the sequence you want to process and click on “Process“.
You can manually edit the video orientation from the output tab, by clicking on “Edit orientation“.
Working with masks
- Only PTGui masks are currently supported, and only the “red” (exclusive) masks are used (green masks won’t be imported).
- Hugin masks work differently; they are not supported and will not be imported by VideoStitch when you apply a Hugin calibration.
Masks allow you to hide parts of the input videos so that they do not appear in the final output. Use masks when you want to push the seams of an input video, hiding this video and revealing the other overlapping videos, in order to fine-tune stitching for a specific feature in the resulting video.
Masks are static over time; you can seek to any frame in the video and instantly review how the mask affects the stitched output.
- When applying a PTGui template to a VideoStitch project, the masks will automatically be imported.
- Editing and removing masks has to be done in PTGui.
If masks from multiple inputs overlap, no image data will appear in that area of the output. The final stitched output will hold a “black hole” (corresponding to what PTGui would output as alpha channel).
Rendering the final video
To render the output video file, simply switch to the Process panel:
- Set the output file name using the ‘browse’ button.
- Review the important project settings: blender, video start and end time, and output size. The ‘maximum’ button will attempt to compute the maximum size.
- Then decide how you want to process the video:
- Hit ‘Process Now’ to start rendering the video immediately. You can choose to render on one or multiple CUDA GPUs.
- ‘Send to batch’ adds the project to the batch stitcher queue. ‘Send a copy of the project’ duplicates and saves the project under a different name, which is then sent to the batch queue. Your current project will remain open in VideoStitch so that you can keep editing it.
- Set the desired video encoding parameters.
- Select the soundtrack that should be copied from one input video to the output.
- Set the projection type and horizontal FOV values for the output video. If you used an external calibration tool, it is recommended to change the projection and output FOV directly in that tool.
- Large panoramic videos and fast motion video content require a higher bitrate
- It is highly recommended to use output sizes that are multiples of 16, e.g. 1920×960, 3840×1920, 4096×2048, 4800×2400, 5120×2560
- To encode your output video specifically for web and mobile devices, please check-out this blog post: http://www.video-stitch.com/encoding-workflow/
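Snapping a dimension to the nearest multiple of 16 is simple arithmetic; the helper below is hypothetical, not part of VideoStitch Studio:

```python
def snap_to_multiple(value, multiple=16):
    """Round a pixel dimension to the nearest multiple of 16, as
    recommended for output sizes (illustrative helper only)."""
    return multiple * round(value / multiple)

# A panorama measured at 3835x1915 snaps to 3840x1920:
print(snap_to_multiple(3835), snap_to_multiple(1915))  # → 3840 1920
```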
H264: the default encoding. It is supported by most software and provides the best compression/file size compromise. Maximum resolution: 4096 pixels.
MPEG4: MPEG-4 Part 2 (not AVC) encoded video. The output size must be a multiple of 8. Maximum resolution: 8192 pixels.
MPEG2: widely supported by video players, it provides acceptable quality at the price of a high bitrate. Doesn’t support resolutions that are multiples of 4096 (e.g. 4096px, 8192px). Very high resolution videos (over 8192 pixels) won’t be decoded properly by most video players and editing suites, as such high resolution videos are not yet common in the industry.
Exporting very high resolution sequences
Video encoding for very high resolution output can be problematic:
- when the maximum available bitrate is insufficient for the output resolution’s needs.
- when your video editing suite doesn’t properly decode very high resolution videos (most video players won’t properly handle videos over 8K).
In such situations, you may want to fall back to an image sequence export, such as *.jpg or *.tiff.
Using the batch stitcher
The batch stitcher has been available since VideoStitch 1.2.0 and allows you to prepare multiple VideoStitch projects and process them all at once later.
To add projects to the batch stitcher, you can:
- From VideoStitch Studio, click on ‘send to batch’. If you choose “send a copy of the project”, a copy of the project is saved and sent to the batch queue, so that you can continue editing your current project. This is especially useful if you need to process the same sequence with multiple calibrations for advanced post-processing in 3rd-party software.
- Directly drop a project onto the batch stitcher, or choose ‘File > Add projects’
When using the batch stitcher, it is highly recommended to close projects that are already open in VideoStitch Studio. We designed VideoStitch Studio to use the best balance of system resources; however, video stitching is a resource-intensive task, so keep in mind that editing a project while stitching in batch will perform rather slowly on some systems.
Right click on a project to access various options such as removing, resetting or editing the projects.