2023-S2 AI6126 Project 2
Blind Face Super-Resolution
Project 2 Specification (Version 1.0. Last update on 22 March 2024)
Important Dates
Issued: 22 March 2024
Release of test set: 19 April 2024 12:00 AM SGT
Due: 26 April 2024 11:59 PM SGT
Group Policy
This is an individual project
Late Submission Policy
Late submissions will be penalized (5% per day, up to 3 days)
Challenge Description
Figure 1. Illustration of blind face restoration
The goal of this mini-challenge is to generate high-quality (HQ) face images from
corrupted low-quality (LQ) ones (see Figure 1) [1]. The data for this task comes from
the FFHQ dataset. For this challenge, we provide a mini dataset consisting of 5000 HQ
images for training and 400 LQ-HQ image pairs for validation. Note that we do not
provide LQ images in the training set. During training, you need to generate the
corresponding LQ images on the fly by corrupting the HQ images with the random
second-order degradation pipeline [1] (see Figure 2). This pipeline contains 4 types
of degradation: Gaussian blur, downsampling, noise, and compression. We will
provide the code for each degradation function, as well as an example degradation
config, for your reference.
Figure 2. Illustration of second-order degradation pipeline during training
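The second-order pipeline above can be sketched roughly as follows. This is an illustrative simplification, not the provided degradation code: JPEG compression is approximated here by coarse quantization, all parameter ranges are made up, and a single grayscale channel is assumed (color images apply the same per channel).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade_once(img, rng):
    """One round of the blur -> downsample -> noise -> compression chain."""
    # 1. Gaussian blur with a random sigma
    img = gaussian_filter(img, sigma=rng.uniform(0.5, 3.0))
    # 2. Random downsampling, then resize back so two rounds can be chained
    scale = rng.uniform(0.5, 0.9)
    small = zoom(img, scale, order=1)
    img = zoom(small, [o / s for o, s in zip(img.shape, small.shape)], order=1)
    # 3. Additive Gaussian noise with a random strength
    img = img + rng.normal(0.0, rng.uniform(1.0, 10.0), img.shape)
    # 4. Coarse quantization as a crude stand-in for JPEG compression
    q = rng.integers(8, 32)
    img = np.round(img / q) * q
    return np.clip(img, 0.0, 255.0)

def second_order_degradation(hq, seed=0):
    """Apply the degradation chain twice, then downsample 4x to get the LQ input."""
    rng = np.random.default_rng(seed)
    lq = degrade_once(degrade_once(hq.astype(np.float64), rng), rng)
    return zoom(lq, 0.25, order=1)
```

In the real pipeline each stage draws its parameters (kernel type, resize mode, noise type, JPEG quality) from the ranges in the provided configuration file, which is why your generated LQ data must match that config exactly.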
During validation and testing, algorithms will generate an HQ image for each LQ face
image. The quality of the output will be evaluated based on the PSNR metric
between the output and HQ images (HQ images of the test set will not be released).
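PSNR between an output and its ground-truth HQ image can be computed as below. This is the standard definition as a quick sanity check; the official score comes from the provided evaluation script.

```python
import numpy as np

def psnr(output, target, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((output.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```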
Assessment Criteria
In this challenge, we will evaluate your results quantitatively for scoring.
Quantitative evaluation:
We will evaluate and rank the performance of your network model on the 400
synthetic LQ test face images based on PSNR.
The higher your solution ranks, the higher the score you will receive. In general,
scores will be awarded based on the table below.
Percentile in ranking | ≤ 5% | ≤ 15% | ≤ 30% | ≤ 50% | ≤ 75% | ≤ 100% | *
Scores                |  20  |  18   |  16   |  14   |  12   |  10    | 0
Notes:
● We will award bonus marks (up to 2 marks) if the solution is interesting or
novel.
● To obtain more natural HQ face images, we also encourage students to try a
discriminator loss with a GAN during training. Note that a discriminator loss
will lower the PSNR score but make the results look more natural, so you
need to adjust the GAN weight carefully to find a trade-off between PSNR
and perceptual quality. You may earn bonus marks (up to 2 marks) if you
achieve outstanding results on the 6 real-world LQ images, consisting of two
slightly blurry, two moderately blurry, and two extremely blurry test images.
(The real-world test images will be released together with the 400-image test
set.) [optional]
● Marks will be deducted if the submitted files are incomplete, e.g., important
parts of your core code are missing or you do not submit a short report.
● TAs will answer questions about the project specification or ambiguities. For
questions related to code installation, implementation, and program bugs,
TAs will only provide simple hints and pointers.
Requirements
● Download the dataset, baseline configuration file, and evaluation script: here
● Train your network using our provided training set.
● Tune the hyper-parameters using our provided validation set.
● Your model should contain fewer than 2,276,356 trainable parameters, which
is 150% of the trainable parameters in SRResNet [4] (your baseline network).
You can use
sum(p.numel() for p in model.parameters() if p.requires_grad)
to compute the number of trainable parameters in your network. The
parameter limit applies only to the generator if you use a GAN.
● The test set will be available one week before the deadline (this is a common
practice of major computer vision challenges).
● No external data or pre-trained models are allowed in this mini
challenge. You are only allowed to train your models from scratch using the
5000 HQ images in the given training set.
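The parameter check above can be wrapped in a small helper; the demo model below is hypothetical and only illustrates the counting, it is not the SRResNet baseline.

```python
import torch.nn as nn

PARAM_BUDGET = 2_276_356  # 150% of SRResNet's trainable parameter count

def count_trainable(model: nn.Module) -> int:
    # Only parameters the optimizer will update count toward the budget
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Tiny hypothetical model, just to demonstrate the check
demo = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),  # 3*3*3*8 weights + 8 biases = 224
    nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1),  # 8*3*3*3 weights + 3 biases = 219
)
assert count_trainable(demo) < PARAM_BUDGET
```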
Submission Guidelines
Submitting Results on CodaLab
We will host the challenge on CodaLab. You need to submit your results to CodaLab.
Please follow these guidelines to ensure your results are successfully
recorded.
● The CodaLab competition link:
https://codalab.lisn.upsaclay.fr/competitions/18233?secret_key
=6b842a59-9e76-47b1-8f56-283c5cb4c82b
● Register a CodaLab account with your NTU email.
● [Important] After your registration, please fill in the username in the Google
Form: https://forms.gle/ut764if5zoaT753H7
● Submit the output face images from your model on the 400 test images as a
zip file. Put the results in a subfolder and use the same file names as the
original test images (e.g., if the input image is named 00001.png, your result
should also be named 00001.png).
● You can submit your results multiple times but no more than 10 times per day.
You should report your best score (based on the test set) in the final report.
● Please refer to Appendix A for the hands-on instructions for the submission
procedures on CodaLab if needed.
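The packaging step can be scripted so file names stay aligned with the test set; a minimal sketch, where the folder name results and the file submission.zip are placeholders:

```python
import os
import zipfile

def pack_results(result_dir="results", out_zip="submission.zip"):
    """Zip the predicted images, keeping them in a subfolder and reusing
    the original test file names (e.g. 00001.png -> results/00001.png)."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(result_dir)):
            if name.endswith(".png"):
                zf.write(os.path.join(result_dir, name),
                         arcname="results/" + name)
```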
Submitting Report on NTULearn
Submit the following files (all in a single zip file named with your matric number, e.g.,
A12345678B.zip) to NTULearn before the deadline:
● A short report in PDF format of no more than five A4 pages (single-column,
single-line spacing, Arial 12 pt font; the page limit excludes the cover page
and references) describing your final solution. The report must include the
following information:
○ the model you use
○ the loss functions
○ training curves (i.e., loss)
○ predicted HQ images on 6 real-world LQ images (if you attempted the
adversarial loss during training)
○ PSNR of your model on the validation set
○ the number of parameters of your model
○ Specs of your training machine, e.g., number of GPUs, GPU model
You may also include other information in the report, e.g., any data
processing or operations you used to obtain your results.
● The best results (i.e., the predicted HQ images) from your model on the 400
test images, together with a screenshot of the score achieved on CodaLab.
● All necessary code, training log files, and the model checkpoint (weights) of
your submitted model. We will use these to check for plagiarism.
● A Readme.txt containing the following info:
○ Your matriculation number and your CodaLab username.
○ Description of the files you have submitted.
○ References to the third-party libraries you are using in your solution
(leave blank if you are not using any of them).
○ Any details you want the person who tests your solution to know when
they test your solution, e.g., which script to run, so that we can check
your results, if necessary.
Tips
1. For this project, you can use the Real-ESRGAN [1] codebase, which is built
on the BasicSR toolbox; BasicSR implements many popular image restoration
methods with a modular design and provides detailed documentation.
2. We included a sample Real-ESRGAN configuration file (a simple network, i.e.,
SRResNet [4]) as an example in the shared folder. [Important] You need to:
a. Put “train_SRResNet_x4_FFHQ_300k.yml” under the “options” folder.
b. Put “ffhqsub_dataset.py” under the “realesrgan/data” folder.
The PSNR of this baseline on the validation set is around 26.33 dB.
3. For the calculation of PSNR, you can refer to ‘evaluate.py’ in the shared folder.
You should replace the corresponding path ‘xxx’ with your own path.
4. The training data is important in this task. If you do not plan to use MMEditing
for this project, please make sure your pipeline to generate the LQ data is
identical to the one in the configuration file.
5. The training configuration of GAN models is also available in Real-ESRGAN
and BasicSR. You can freely explore the repository.
6. The following techniques may help you to boost the performance:
a. Data augmentation, e.g., random horizontal flip (but do not use vertical
flip; otherwise it will break the alignment of the face images)
b. More powerful models and backbones (within the complexity
constraint); see the works in the references
c. Hyper-parameter fine-tuning, e.g., choice of the optimizer, learning
rate, number of iterations
d. A discriminative GAN loss will help generate more natural results (but
it lowers PSNR; find a trade-off by adjusting the loss weights)
e. Think about what is unique to this dataset and propose novel modules.
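Tip 6a can be sketched as below; the key point is applying the same flip to both images of an LQ-HQ pair and never flipping vertically, since faces are roughly left-right symmetric but not top-bottom.

```python
import numpy as np

def random_hflip(hq, lq, rng):
    """Apply one shared horizontal flip to an HQ/LQ image pair.

    Vertical flips are deliberately avoided: they would break the
    facial alignment that the FFHQ images rely on."""
    if rng.random() < 0.5:
        hq = hq[:, ::-1].copy()
        lq = lq[:, ::-1].copy()
    return hq, lq
```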
References
[1] Wang et al., Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure
Synthetic Data, ICCVW 2021
[2] Wang et al., GFP-GAN: Towards Real-World Blind Face Restoration with Generative
Facial Prior, CVPR 2021
[3] Zhou et al., Towards Robust Blind Face Restoration with Codebook Lookup Transformer,
NeurIPS 2022
[4] C. Ledig et al., Photo-realistic Single Image Super-Resolution using a Generative
Adversarial Network, CVPR 2017
[5] Wang et al., A General U-Shaped Transformer for Image Restoration, CVPR 2022
[6] Zamir et al., Restormer: Efficient Transformer for High-Resolution Image Restoration,
CVPR 2022
Appendix A Hands-on Instructions for Submission on CodaLab
After your participation in the competition is approved, you can submit your results
here:
Then upload the zip file containing your results.
If the 'STATUS' changes to 'Finished', your result has been successfully uploaded.
Please note that this may take a few minutes.
