VideoStitch v1.1 release
We are very pleased to announce the immediate release of VideoStitch v1.1, the new version of our video stitching engine. It comes with a list of very exciting features!
First of all, the Mac version is here!
Many of you have asked for this. It has taken some time, but now it's here, for you!
We are releasing it as a beta. It has proven stable and already processes very fast! We've been testing it thoroughly on recent Mac systems, and image quality and speed are just as good as in the Linux and Windows versions. We look forward to your feedback on this release.
And here is the list of new features we've added in this release:
Improved performance, real-time stitching
My mum always told me real-time video stitching was impossible. Then I showed her VideoStitch 1.1.
Playback speed in the preview has improved significantly with the same settings as before. Depending on your configuration and output settings, you may even stitch your final video faster than real time.
To give you an overview of the improvement, here is a comparison between VideoStitch v1.0 and VideoStitch v1.1.
The benchmark was run on a GTX 470 graphics card with 1280×960 inputs at 30 fps, a 2K export, 1 minute of video, and H.264 encoding.
| | v1.0 (multiband) | v1.1 (multiband) | v1.1 (linear) |
| --- | --- | --- | --- |
| speed-up | reference | 2.8× faster | 4.33× faster |
What do these numbers show?
- With a not-so-recent graphics card (the GTX 470 is 3 years old), we are able to process faster than real time: 56 seconds to process one minute of video.
- With a more recent computer, we can easily process at 100 fps with a 2K export, and 30 fps with a 4K export, on a single graphics card. Of course, we can go even faster by using several GPUs.
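As a quick sanity check of the claim above, here is a minimal sketch of the arithmetic, using only the figures quoted in this post (60 seconds of source video processed in 56 seconds):

```python
# Illustrative check of the benchmark figures quoted above.
# The 56 s figure comes from this post; everything else is plain arithmetic.
video_seconds = 60        # 1 minute of source video at 30 fps
processing_seconds = 56   # measured processing time on a GTX 470

realtime_factor = video_seconds / processing_seconds
print(f"{realtime_factor:.2f}x real time")  # → 1.07x real time, i.e. faster than real time

# Equivalent throughput in frames per second:
throughput_fps = 30 * realtime_factor
print(f"{throughput_fps:.1f} fps")  # → 32.1 fps
```

Anything above 1.0x means the engine keeps up with the camera, which is what makes live preview and real-time stitching possible.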
We are happy to be the fastest stitching engine out there. It isn't even worth comparing with other stitchers at this point; we're far ahead ;-)
Wondering how fast it performs on your system? The preview rendering speed is now displayed directly in the GUI!
Linear blending
Linear blending is now available directly in the GUI. It gives good results while processing much faster and using less memory. You'll find it under the process tab, and it applies instantly to the output preview: no need to render the output video to see the final result!
Copy the sound from one input to the output video.
This has been a popular request: you can now easily select one input's sound from the process tab if you want to include sound in the output.
Exposure compensation
This is a very useful feature for animated footage. It greatly increases quality by compensating for the camera's automatic exposure.
Go to Edit / Exposure compensation and launch the calculation.
Beta feature! Keyframes & new timeline
Showcased at the IVRPA meeting, it is now released as a beta feature, and it already proves useful. If you use the timeline, we recommend sticking to linear-interpolation keyframes for now.
With the new timeline, you can:
- Fine-tune each input's exposure
- Animate yaw/pitch/roll for dynamic leveling
To activate this feature, you need to launch VideoStitch in beta mode. The procedure is described on our forum.
Improved workflow with PTGui / Hugin
We have improved the workflow and integration with PTGui/Hugin. You can now use a PTGui or Hugin project as a template directly in VideoStitch: just drop your template onto the videos and it will be applied automatically.
When working on a template with PTGui/Hugin, you can now extract frames from VideoStitch so that PTGui/Hugin is refreshed.
We've also made various GUI improvements for a more user-friendly experience.
Our forum has been online for a few days now. Register to see all the posts, access the beta features, and more. Feel free to ask any questions!
Our support desk is still available, of course, so you can also send us an email at email@example.com if you prefer.
For the complete changelog, follow this link: changelog
Also, please subscribe to our newsletter, and we'll keep in touch whenever we have great news for you!