
Commit ea668d6

Add 2xLiveActionV1_SPAN (#476)
1 parent df0add1 commit ea668d6

File tree

data/models/2x-LiveActionV1-SPAN.json
data/users.json

2 files changed: +74 -0 lines changed


data/models/2x-LiveActionV1-SPAN.json

Lines changed: 71 additions & 0 deletions
@@ -0,0 +1,71 @@
{
    "name": "2xLiveActionV1_SPAN",
    "author": "jcj83429",
    "license": "CC-BY-NC-SA-4.0",
    "tags": [
        "compression-removal",
        "deblur",
        "dehalo",
        "general-upscaler",
        "jpeg",
        "photo",
        "restoration",
        "video-frame"
    ],
    "description": "SPAN model for live action film and digital video. The main goal is to fix/reduce common video quality problems while maintaining fidelity. I tried the existing video-focused models and they all denoise or cause colour shifts so I decided to train my own.\n\nThe model is trained with compression (JPEG, MPEG-4 ASP, H264, VP9, H265), chroma subsampling, blurriness from multiple scaling, uneven horizontal and vertical resolution, oversharpening halos, bad deinterlacing jaggies, and onscreen text. It is not trained to remove noise at all so it preserves details in the source well. To prevent colour/brightness shifts, I used consistency loss in neosr. I had to modify consistency loss to use a stronger blur so it doesn't interfere with the halo removal.\n\nLimitations:\n1. The model has limited ability to see details through heavy grain, but light to moderate grain is fine.\n2. The model still does not handle bad deinterlacing perfectly, especially if the source is vertically resized. Fixing bad deinterlacing is not the main goal so it is what it is. Sources that are line-doubled throughout should be descaled back to half height first for best results.\n3. The model sometimes oversharpens a little. This is probably because the training data has some oversharpened images.\n4. This model generally cannot handle VHS degradation.\n\nMore comparisons: https://slow.pics/c/DtDN7gaq\n\nThe training config and image degradation scripts used to create training data can be found in https://github.com/jcj83429/upscaling/tree/9332e7d5b07747ff347e5abdc43f8144364de9f7/2xLiveActionV1_SPAN",
    "date": "2025-05-19",
    "architecture": "span",
    "size": [
        "48nf"
    ],
    "scale": 2,
    "inputChannels": 3,
    "outputChannels": 3,
    "resources": [
        {
            "platform": "pytorch",
            "type": "pth",
            "size": 8947821,
            "sha256": "8b166c75831ea7f694d9058ee9c8df8148af8cc1d2b57e69e6581b15cab572f7",
            "urls": [
                "https://raw.githubusercontent.com/jcj83429/upscaling/f73a3a02874360ec6ced18f8bdd8e43b5d7bba57/2xLiveActionV1_SPAN/2xLiveActionV1_SPAN_490000.pth"
            ]
        },
        {
            "platform": "onnx",
            "type": "onnx",
            "size": 1654748,
            "sha256": "bfa72f3c6347076aed140d0836cee30c27ea434c047beeaf9466469483836ecc",
            "urls": [
                "https://github.com/jcj83429/upscaling/raw/f73a3a02874360ec6ced18f8bdd8e43b5d7bba57/2xLiveActionV1_SPAN/2xLiveActionV1_SPAN_490000.onnx"
            ]
        }
    ],
    "trainingIterations": 490000,
    "trainingEpochs": 271,
    "trainingBatchSize": 20,
    "trainingHRSize": 128,
    "trainingOTF": false,
    "dataset": "nomosv2",
    "datasetSize": 36000,
    "images": [
        {
            "type": "paired",
            "caption": "xvid bad quality",
            "LR": "https://i.slow.pics/MqRdbeSL.webp",
            "SR": "https://i.slow.pics/k1sDOhPk.webp"
        },
        {
            "type": "paired",
            "caption": "720p slightly oversharpened",
            "LR": "https://i.slow.pics/TCCKrYSs.webp",
            "SR": "https://i.slow.pics/BIRBcKK9.webp"
        },
        {
            "type": "paired",
            "caption": "1080p noisy and soft",
            "LR": "https://i.slow.pics/GJhVMCUy.webp",
            "SR": "https://i.slow.pics/h9VzPczR.webp"
        }
    ]
}
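
The description above notes that neosr's consistency loss was used, modified to apply a stronger blur, so that colour and brightness stay anchored to the source without interfering with halo removal. The sketch below is a minimal illustration of that general idea (not the actual neosr implementation): it compares only heavily blurred versions of the network output and the ground truth, leaving fine detail to the other loss terms. The kernel size and sigma are placeholders, not the values used in training.

# Illustrative blur-based consistency loss; assumes NCHW float tensors in [0, 1].
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur


def consistency_loss(sr: torch.Tensor, hr: torch.Tensor,
                     kernel_size: int = 21, sigma: float = 7.0) -> torch.Tensor:
    # Blur both images hard enough that only low-frequency colour/brightness
    # information remains, then penalise the difference so the model cannot
    # drift in tone. Blur strength here is a placeholder.
    sr_low = gaussian_blur(sr, [kernel_size, kernel_size], [sigma, sigma])
    hr_low = gaussian_blur(hr, [kernel_size, kernel_size], [sigma, sigma])
    return F.l1_loss(sr_low, hr_low)

A stronger blur pushes the comparison further into the low frequencies, which is presumably why it stops penalising the removal of oversharpening halos, a mid/high-frequency change.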

data/users.json

Lines changed: 3 additions & 0 deletions
@@ -122,6 +122,9 @@
     "jacob": {
         "name": "Jacob"
     },
+    "jcj83429": {
+        "name": "jcj83429"
+    },
     "jingyunliang": {
         "name": "JingyunLiang"
     },
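
Once merged, the metadata above is enough for a downstream script to fetch and run the model. The sketch below is not part of the commit: it reads the resources entry from data/models/2x-LiveActionV1-SPAN.json, verifies the recorded sha256, and runs a 2x upscale through onnxruntime. It assumes the exported graph accepts dynamic NCHW float32 input in [0, 1] (not stated in the metadata), and the local filenames 2xLiveActionV1_SPAN.onnx and frame.png are placeholders.

# Minimal consumer sketch for the resources listed in the model JSON above.
import hashlib
import json
import urllib.request

import numpy as np
import onnxruntime as ort
from PIL import Image

with open("data/models/2x-LiveActionV1-SPAN.json") as f:
    meta = json.load(f)

# Pick the ONNX build and verify it against the recorded checksum.
res = next(r for r in meta["resources"] if r["platform"] == "onnx")
model_path = "2xLiveActionV1_SPAN.onnx"
urllib.request.urlretrieve(res["urls"][0], model_path)
with open(model_path, "rb") as f:
    assert hashlib.sha256(f.read()).hexdigest() == res["sha256"], "checksum mismatch"

# Run a 2x upscale; NCHW float32 input in [0, 1] is an assumption.
sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0].name
img = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float32) / 255.0
x = img.transpose(2, 0, 1)[None]            # HWC -> NCHW
y = sess.run(None, {inp: x})[0][0]          # CHW, twice the input resolution
out = np.clip(y.transpose(1, 2, 0) * 255.0, 0, 255).astype(np.uint8)
Image.fromarray(out).save("frame_2x.png")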

0 commit comments