Infognition forum
News: Last Video Enhancer version: 2.2
Author Topic: Techniques coupled in with super resolution?  (Read 56691 times)
Manoman455
« on: August 25, 2010, 04:19:53 PM »

Basically I was thinking: is it possible to couple other techniques with super resolution? Maybe fractals, to increase edge sharpness. I have also been looking at line accents; to my eyes they make jagged lines look the way they are supposed to be: smooth. I'm not talking about Topaz Enhance/Clean, as I have used them and sometimes found "speckle" artifacts near edges and small details. Would it be possible to use the brilliance of super resolution to gather information from multiple frames, recreate the current frame, and then use accents coupled with that multi-frame information to smooth out jagged lines? And during the super resolution process, determine the fractals of the current frame?
These are just ideas and I know they won't get implemented, but I just want to hear an expert's opinion.
Thanks.
I would also like to hear others' ideas, if any.
Dee Mon
« Reply #1 on: August 26, 2010, 11:31:11 AM »

An important part of our super resolution method is spatial upsampling of the input frame. When SR can't find a good match in the motion-compensated high-res previous frame, it uses values from the upsized current frame. So making the spatial upsizing better will benefit overall quality. At first Lanczos3 was used for this upsampling, then another method (similar, but with a different kernel). If we could use an upsampling method that produces smoother, less jagged lines, that would be good. It could be fractal resizing or, for example, NNEDI. Here's a sample:
http://stuff.infognition.com/karate-nnedi3.png
http://stuff.infognition.com/karate-ve194.png
You can see that although NNEDI is less detailed than SR (which is fine, because it's just in-frame upsizing), the edges are not so jagged. Unfortunately this method is very slow. I guess fractal resizing may be even slower, since it requires a thorough search for similar parts of the picture.

Another thought: we would need to check how fractal resizing works with video. The method might be unstable, causing edges to shake from frame to frame: because it is based on finding similarities inside the image, different frames can yield different matches.

So, after all, your idea is nice and worth trying. Later we could try to implement fractal or NNEDI upsizing and see how it works with SR.
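Since the post mentions Lanczos3, here is a minimal 1-D NumPy sketch of Lanczos3 upsampling by a factor of 2. This is only a toy illustration, not VE's actual implementation; `lanczos3_kernel` and `upsample2_1d` are names invented for this example.

```python
import numpy as np

def lanczos3_kernel(x):
    """Lanczos-3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def upsample2_1d(f):
    """Upsample a 1-D signal by 2 with a Lanczos-3 filter (clamped edges).
    A 2-D image would apply this separably to rows and then columns."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    out = np.empty(2 * n)
    for i in range(2 * n):
        x = i / 2.0                      # output position in input coordinates
        j0 = int(np.floor(x))
        js = np.arange(j0 - 2, j0 + 4)   # the 6 nearest input taps
        w = lanczos3_kernel(x - js)
        out[i] = np.dot(w, f[np.clip(js, 0, n - 1)]) / w.sum()
    return out

f = np.array([0.0, 10.0, 20.0, 10.0, 0.0])
up = upsample2_1d(f)
print(np.allclose(up[::2], f))  # True: original samples pass through unchanged
```

At integer positions the kernel is 1 at the centre tap and 0 at the others, so the even output samples reproduce the input exactly; the odd samples are the interpolated ones.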
Manoman455
« Reply #2 on: August 30, 2010, 06:48:48 AM »

Hmmm, NNEDI3 is very slow. I use it myself and have tried long and hard to find an order or method that makes it cooperate with SR, but I get unsatisfactory results.
Now, I know it would be CPU intensive, but what would give the best results if I used NNEDI3 with SR?
NNEDI3, then SR?
or...
SR, then NNEDI3?

I was thinking again, LOL. How would you implement NNEDI3? Would Video Enhancer make an SR frame (the frame created using before and after frames in the sequence) and also an NNEDI3 frame, then use information from both frames (whatever information that might be) to recreate the current frame with high detail and smoothed-out jagged edges? [This doesn't need to be answered, because it's not even planned, but if you have an idea I would very much like to hear it.]

Also, I did research on Video Enhancer before (I think) and knew the answer to this, but I forgot: how many frames before and after the current frame does Video Enhancer use to recreate the current frame? Is there a command or a way to increase this number? Would setting it higher even help, quality-wise?

OK, back to ideas that could possibly be coupled with SR. I was looking at frame rate conversion methods, and a lot of them use interpolation or some kind of motion-based blur (correct me if I'm wrong). I have yet to see or hear of a full-fledged frame rate converter that uses technology similar to SR. Would it be possible to implement frame rate conversion using Video Enhancer's SR algorithm?
I like this idea a lot.

OK, I'm done. What are your thoughts?
« Last Edit: August 30, 2010, 06:55:06 AM by Manoman455 »
Dee Mon
« Reply #3 on: August 30, 2010, 08:23:44 AM »

Quote
what would be the best results if I used NNEDI3 with SR?
NNEDI3 then use SR?
or...
SR then use NNEDI3?

I don't think using NNEDI with SR separately, one after another (in either order), makes sense, but using NNEDI inside SR might be a good thing.

Quote
I was thinking again LOL. How would you implement NNEDI3?
would video enhancer make a SR frame (the frame created using before and after frames in the sequence), and also a NNEDI3 frame and using information from both frames (whatever information that might be) to recreate the current frame with high details and smoothed out jagged edges?

This is exactly what I'm talking about.

Quote
how many frames before and after the current frame does video enhancer use to recreate the current frame?

Information is accumulated frame by frame, so in a sense all previous frames are used, though technically just the last SR frame is used. Future frames are not currently used in VE.

Quote
I have yet to see/hear of a full fledged frame rate converter that uses technology similar to SR. Is it possible to implement frame rate conversion using video enhancers SR algorithm?
I like this idea alot.

I've seen such a technology from the MSU Video Group:
http://www.compression.ru/video/frame_rate_conversion/index_en.html

You're right, SR is similar, and making a frame rate converter out of it is quite possible. You're not the first to come up with this idea. What do you think: is there demand for an offline (non-realtime) frame rate converter?
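As a toy illustration of the gap such a converter would fill: naive converters just blend neighbouring frames, while a motion-compensated (SR-style) converter would warp along motion vectors first. A minimal NumPy sketch of the naive blend (function names invented for this example):

```python
import numpy as np

def midpoint_frame(a, b):
    """Naive in-between frame: a plain 50/50 average of two neighbours.
    A motion-compensated converter would instead warp a and b along
    estimated motion vectors before blending."""
    return ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)

def double_fps(frames):
    """Double the frame rate by inserting one blended frame per pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.extend([a, midpoint_frame(a, b)])
    out.append(frames[-1])
    return out

clip = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 100, 200)]
doubled = double_fps(clip)
print(len(doubled))  # 5 frames: 3 originals + 2 in-betweens
```

On moving content this naive blend produces ghosting, which is exactly why the motion compensation SR already has would matter here.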
Henery
« Reply #4 on: August 30, 2010, 10:32:49 AM »

Right now I can think of two things that could benefit from frame rate upconversion:
1) Security camera clips with very low FPS.
2) Setting a mobile phone's video capture to "night capture" (I hope you know what I mean) usually drops the video's frame rate dramatically (I get 4-5 fps with my Nokia). Using that frame rate upconversion technology would mean we could get low-noise videos with smooth motion.

About using fractals for video: I read somewhere that the Boris Uprez filter (by BorisFX) uses fractals (in some form) for upscaling. Still, Video Enhancer is better, since I think Uprez doesn't gather information from multiple frames. I hope you can add Uprez and ODU (by Engelmann Media) results to the quality comparison site.

Dee Mon
« Reply #5 on: August 30, 2010, 02:40:45 PM »

Hi Henery,

thanks for mentioning those two solutions; we had missed them in our new comparison (currently in progress). Now we'll add them too.

In some security cameras the FPS is close to 2-3, or even 1. For such very low FPS I think even motion-compensated frame rate conversion will fail: the frames are often too different to infer the intermediate ones.
Manoman455
« Reply #6 on: August 31, 2010, 03:43:07 AM »

Quote
Information is accumulated frame by frame, so in a sense all previous frames are used, though technically just one last SR frame is used. Future frames are not used in VE now.

Hmm, I thought VE also worked with future frames, but I can't argue with the maker, lol.
Well then...
If I had two exact copies of a video and reversed one of them, ran both the backwards copy and the normal copy through VE (separately), then reversed the backwards result so it plays forward again and somehow blended the two results (all work done uncompressed), would there be any point, quality-wise? Would that count as accumulating from previous and future frames?

Back to frame rate conversion...
I have seen all of MSU's filters and they are great; I use their free filters all the time. But like the one at the link you provided, some are licensed to companies, and I would guess the price would be pretty high if I asked them to sell to me. I don't know what the current demand for a frame rate converter is, but I know I would get one from you if it surpassed MSU's free AviSynth frame rate converter in quality and/or produced reasonably correct-looking results with fewer artifacts.
And then in a future version you could possibly add a checkbox that says "create in-between frames and double the frame rate" (I say "in-between frames" because I think the average Joe would understand that better than "interpolate").

Another idea...
1. Stills detection (to increase processing speed by reusing the previously generated SR frame) [this might already be present, but I didn't know].
2. (This idea is to improve quality.) It is stated that the SR algorithm only uses previous frames to recreate the current one, but is it the previous original frames or the SR-processed frames? I have thought of a 2-pass method which might work or might be pointless.
Here's a picture of how it would/should work.

That's it for now.
Thanks.
« Last Edit: August 31, 2010, 03:45:19 AM by Manoman455 »
Dee Mon
« Reply #7 on: August 31, 2010, 09:04:57 AM »

Quote
If I had 2 exact copies of of a video and reversed one of them, then ran the backwards and the other copy through VE(separately), then reverse that backwards video so it plays forward again then somehow blend the results.(all work is done uncompressed), would there be any point quality wise? would that be considered accumulating from previous and future frames?
Yes, this can give some improvement. I've just tried it, simply averaging the videos processed in both directions, and got a 0.2 dB PSNR improvement. Not great, but some lines became smoother.
Actually, when I worked with the MSU Group on MSU Super Resolution we did a similar thing: averaging motion-compensated frames from the past and the future, i.e. blending inside SR instead of blending SR results. That gave a good improvement. This approach is the first to be implemented in the "super high quality" mode someone requested here earlier.
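To see why averaging two estimates can raise PSNR at all, here is a small NumPy sketch with synthetic noise. With truly independent noise, as simulated here, the gain approaches 3 dB; the two VE passes are heavily correlated, hence only about 0.2 dB in practice.

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB, assuming an 8-bit value range."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
truth = rng.integers(0, 256, (64, 64)).astype(np.float64)
fwd = truth + rng.normal(0, 8, truth.shape)  # stand-in for the forward pass
bwd = truth + rng.normal(0, 8, truth.shape)  # stand-in for the reversed pass
avg = (fwd + bwd) / 2                        # blend of the two results

print(psnr(truth, avg) > psnr(truth, fwd))  # True: averaging reduces noise
```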

Quote
Stills Detection
Nice idea.

Quote
It is stated that the SR algorithm only use previous to recreate the current, but is it the previous original frames or the SR processed frames?
It's the SR-processed frames. Here's a scheme:
SR_1 = upsample(Original_Frame_1)
SR_2 = smart_blend( upsample(Original_Frame_2), motion_compensate(SR_1) )
SR_3 = smart_blend( upsample(Original_Frame_3), motion_compensate(SR_2) )
SR_4 = smart_blend( upsample(Original_Frame_4), motion_compensate(SR_3) )
...
Here you can see how, by using just the last SR frame, we make use of all the previous information.
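The scheme above can be sketched directly in code. Everything here is a stand-in (nearest-neighbour upsampling, identity motion compensation, a fixed 50/50 blend); the engine's real versions of these three functions are what make it work.

```python
import numpy as np

def upsample(frame, factor=2):
    # Nearest-neighbour stand-in for VE's real spatial upsampler.
    return np.kron(frame, np.ones((factor, factor)))

def motion_compensate(prev_sr):
    # Identity stand-in; the real engine warps prev_sr along motion vectors.
    return prev_sr

def smart_blend(spatial, temporal, w=0.5):
    # Fixed-weight stand-in for the adaptive per-pixel blending inside SR.
    return w * spatial + (1 - w) * temporal

def super_resolve(frames):
    """SR_1 = upsample(F_1); SR_n = smart_blend(upsample(F_n), MC(SR_{n-1}))."""
    sr = upsample(frames[0])
    for f in frames[1:]:
        sr = smart_blend(upsample(f), motion_compensate(sr))
    return sr

frames = [np.full((4, 4), float(i)) for i in range(3)]
print(super_resolve(frames).shape)  # (8, 8)
```

Because each SR_n folds in SR_{n-1}, the influence of every earlier frame decays geometrically but never fully disappears.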
Manoman455
« Reply #8 on: August 31, 2010, 08:31:49 PM »

Hmm, wow, that reversed idea worked... I've got to give it a try. Can I get instructions on how exactly you did it?
I was also wondering: if I had every variation of the original video (backwards, rotated 90/180/270 degrees, flipped horizontally and vertically), that would give me about 7 different input sources (the original unmodified video included) to feed to VE, and I would have 7 outputs. If I then corrected the orientation of all the variants so they play forward and upright and blended all 7 results, that should, to my mind at least, give an improvement. I might be wrong. Basically I was thinking: if one variation of the video makes a 0.2 dB improvement, then there shouldn't be any reason why 6 other variants wouldn't make an even better improvement.


Quote
Stills Detection
Nice idea.

Quote
It is stated that the SR algorithm only use previous to recreate the current, but is it the previous original frames or the SR processed frames?
It's SR processed frames. Here's a scheme:
SR_1 = upsample(Original_Frame_1)
SR_2 = smart_blend( upsample(Original_Frame_2), motion_compensate(SR_1) )
SR_3 = smart_blend( upsample(Original_Frame_3), motion_compensate(SR_2) )
SR_4 = smart_blend( upsample(Original_Frame_4), motion_compensate(SR_3) )
...
Here you can see how by using last SR frame we use all previous information.

That smart_blend: where can I find it?
Is there something similar for AviSynth, VirtualDub, or other third-party software?
I've been looking and can only find overlay filters and effects, but no blending/averaging.
I took a look at MSU SR and it seems they don't look at future frames either.
It seems like such a simple idea, yet I guess it's harder to implement than I thought.
Oh well, thanks again for the quick response.
Sunny666
« Reply #9 on: September 01, 2010, 12:11:07 AM »

I have used the double (or triple) FPS technique after NNEDI3 upsampling to increase the quality of webcam videos, and I can say that the current version of Video Enhancer recovers more details than NNEDI3. I hope a future release will bring even more quality.
Dee Mon
« Reply #10 on: September 01, 2010, 07:40:55 AM »

Manoman455,
Quote
I was also wondering, but If I had every variation of the original video (backwards, rotated 90/180/270 degrees,flipped horizontal and vertical)

Rotation and flips don't add any new information: 2+3 = 3+2. Reversing in time does give new information (future frames), but from rotations and flips I wouldn't expect anything except heating up some air with the CPU.

Quote
That smart_blend, where can I find it?
Nowhere: it's inside SR and is the key part of our engine.

To average video from two clips you can use the Merge function of AviSynth:
http://avisynth.org/oldwiki/index.php?page=Merge
Like this:
Code:
# the clip processed in the forward direction
v1 = AVISource("smiths-ve.avi")
# the clip processed backwards, played forward again
v2 = AVISource("smiths-rev-ve.avi").Reverse()
# 50/50 average of the two
v1.Merge(v2, 0.5)


Sunny666,
Quote
current version of video enhancer recovers more details than nnedi3
That's expected, since NNEDI3 is still only an in-frame upsizing technique.
Henery
« Reply #11 on: September 03, 2010, 04:07:54 PM »

What about using SR for detail recovery? Here's a concept: if you look closely at the blocks of a heavily compressed video, you should see that some blocks have more detail than others, but in the next frame those more detailed blocks look flat and less detailed. So my idea is that Video Enhancer would try to pick those more detailed blocks and then intelligently blend them over the less detailed blocks of other frames. This would be possible for parts with motion, but it might work better on the static parts of a video. I hope you understood what I meant.

Another thing: would it be possible for the next version of VE's deblocking to remove those bright red blocks which even MSU's Smart Deblocking can't remove? You can see those red block artifacts on edges.
Manoman455
« Reply #12 on: September 03, 2010, 10:49:25 PM »

The red blocks are probably caused by the output codec used: whenever I use ffdshow to output H.264 I see blocks on red edges, but when I encode with Xvid there are no blocks.
Just my opinion.
Dee Mon
« Reply #13 on: September 04, 2010, 12:00:00 PM »

1. The idea of deblocking using details from other frames is interesting, but it requires a lot of thought and changes in SR; its current approach cannot do that.

2. Can you post a screenshot with those red blocks?
Henery
« Reply #14 on: October 29, 2010, 10:51:02 PM »

1) Is it possible to use SR for video stabilisation? I think that would be a very useful tool alongside a frame rate upconverter, and we would get better enlargements too. VReveal's stabilisation filter just zooms in on the video (I suppose), so parts of the video are left outside the frame. MSU's Deshaker filter is more advanced because it fills the so-called "unknown areas" while keeping all parts of the video. This is where I think SR could do its job.

2) VE relies on VirtualDub filters, but could VE someday use its own higher-quality filters, developed to work better with super resolution? Such filters would be at least a deinterlacer, a denoiser, and maybe detail recovery.

3) I accidentally deleted the clip where I had those red blocks, but I'll try to find another example somewhere else.

4) I just noticed one thing: nobody has requested using SR for 2D-to-3D conversion! 3D is today's word, so adding some kind of 3D conversion technology would definitely bring more attention to this already marvellous software. Engelmann Media has a program called MakeMe3D, and it uses multiframe analysis and object detection to get sharper edges. It also supports three kinds of 3D glasses: polarization, shutter, and anaglyph (even those cheap paper/cardboard ones). So what do you others think about VE with 3D conversion technology, and about those other requests?