
create my own profile. #129

Open · wants to merge 18 commits into base: main
26 changes: 13 additions & 13 deletions _config.yml
@@ -6,13 +6,13 @@
# `jekyll serve`. If you change this file, please restart the server process.

# Site Settings
title : "Lorem ipsum"
description : "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. "
repository : "RayeRen/acad-homepage.github.io"
title : "Welcom to Yining Pan's Homepage"
description : "Always be curious, always be exploring."
repository : "pynsigrid/pynsigrid.github.io"
google_scholar_stats_use_cdn : true

# google analytics
google_analytics_id : # get google_analytics_id from https://analytics.google.com/analytics/
google_analytics_id : G-6ZMFJEP46X

# SEO Related
google_site_verification : # get google_site_verification from https://search.google.com/search-console/about
@@ -21,14 +21,14 @@ baidu_site_verification : # get baidu_site_verification from https://ziyuan.ba

# Site Author
author:
name : "Lorem ipsum"
avatar : "images/android-chrome-512x512.png"
bio : "Lorem ipsum College"
location : "Beijing, China"
name : "Yining Pan"
avatar : "images/android-chrome-192x192.png"
bio : "PhD student at SUTD | A*STAR, focusing on multi-modal perception and generation"
location : "Singapore"
employer :
pubmed :
googlescholar : "https://scholar.google.com/citations?user=YOUR_GOOGLE_SCHOLAR_ID"
email : "[email protected]"
googlescholar : "https://scholar.google.com/citations?user=l_6n20kAAAAJ"
email : "[email protected]"
researchgate : # e.g., "https://www.researchgate.net/profile/yourprofile"
uri :
bitbucket :
@@ -37,13 +37,13 @@ author:
flickr :
facebook :
foursquare :
github : # e.g., "github username"
github : "pynsigrid"
google_plus :
keybase :
instagram :
instagram : "@sigridpan"
impactstory : # e.g., "https://profiles.impactstory.org/u/xxxx-xxxx-xxxx-xxxx"
lastfm :
linkedin : # e.g., "linkedin username"
linkedin : "yining-pan-187333287"
dblp : # e.g., "https://dblp.org/pid/xx/xxxx.html"
orcid : # e.g., "https://orcid.org/xxxx"
pinterest :
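The `google_analytics_id` set above (G-6ZMFJEP46X) is consumed by the template's analytics include, which this diff does not touch. A minimal sketch of how a GA4 ID is typically wired into a Jekyll layout; the include name and exact markup here are assumptions, not the template's actual file:

```html
<!-- _includes/analytics.html (hypothetical): standard GA4 gtag snippet reading the ID from _config.yml -->
{% if site.google_analytics_id %}
<script async src="https://www.googletagmanager.com/gtag/js?id={{ site.google_analytics_id }}"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag() { dataLayer.push(arguments); }
  gtag('js', new Date());
  gtag('config', '{{ site.google_analytics_id }}');  // resolves to G-6ZMFJEP46X for this site
</script>
{% endif %}
```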
2 changes: 1 addition & 1 deletion _includes/author-profile.html
@@ -56,7 +56,7 @@ <h3 class="author__name">{{ author.name }}</h3>
<li><a href="https://www.xing.com/profile/{{ author.xing }}"><i class="fab fa-fw fa-xing-square" aria-hidden="true"></i> XING</a></li>
{% endif %}
{% if author.instagram %}
<li><a href="https://instagram.com/{{ author.instagram }}"><i class="fab fa-fw fa-instagram" aria-hidden="true"></i> Instagram</a></li>
<li><a href="https://unsplash.com/{{ author.instagram }}"><i class="fab fa-fw fa-instagram" aria-hidden="true"></i> Unsplash</a></li>
{% endif %}
{% if author.tumblr %}
<li><a href="https://{{ author.tumblr }}.tumblr.com"><i class="fab fa-fw fa-tumblr-square" aria-hidden="true"></i> Tumblr</a></li>
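The hunk above reuses the `instagram` config key to link to Unsplash while keeping the Instagram icon and key name. A dedicated entry would keep the config self-describing; the sketch below is not part of this diff and assumes a new `unsplash` field under `author:` in `_config.yml`, and that the bundled Font Awesome build ships an `fa-unsplash` brand glyph:

```html
{% if author.unsplash %}
  <!-- Hypothetical dedicated Unsplash entry; author.unsplash holds the handle, e.g. "@sigridpan" -->
  <li><a href="https://unsplash.com/{{ author.unsplash }}"><i class="fab fa-fw fa-unsplash" aria-hidden="true"></i> Unsplash</a></li>
{% endif %}
```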
41 changes: 5 additions & 36 deletions _pages/about.md
@@ -16,43 +16,12 @@ redirect_from:
{% assign url = gsDataBaseUrl | append: "google-scholar-stats/gs_data_shieldsio.json" %}

<span class='anchor' id='about-me'></span>
{% include_relative includes/intro.md %}

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. Suspendisse condimentum, libero vel tempus mattis, risus risus vulputate libero, elementum fermentum mi neque vel nisl. Maecenas facilisis maximus dignissim. Curabitur mattis vulputate dui, tincidunt varius libero luctus eu. Mauris mauris nulla, scelerisque eget massa id, tincidunt congue felis. Sed convallis tempor ipsum rhoncus viverra. Pellentesque nulla orci, accumsan volutpat fringilla vitae, maximus sit amet tortor. Aliquam ultricies odio ut volutpat scelerisque. Donec nisl nisl, porttitor vitae pharetra quis, fringilla sed mi. Fusce pretium dolor ut aliquam consequat. Cras volutpat, tellus accumsan mattis molestie, nisl lacus tempus massa, nec malesuada tortor leo vel quam. Aliquam vel ex consectetur, vehicula leo nec, efficitur eros. Donec convallis non urna quis feugiat.
{% include_relative includes/news.md %}

My research interest includes neural machine translation and computer vision. I have published more than 100 papers at the top international AI conferences with total <a href='https://scholar.google.com/citations?user=DhtAFkwAAAAJ'>google scholar citations <strong><span id='total_cit'>260000+</span></strong></a> (You can also use google scholar badge <a href='https://scholar.google.com/citations?user=DhtAFkwAAAAJ'><img src="https://img.shields.io/endpoint?url={{ url | url_encode }}&logo=Google%20Scholar&labelColor=f6f6f6&color=9cf&style=flat&label=citations"></a>).
{% include_relative includes/pub.md %}

{% include_relative includes/honers.md %}

# 🔥 News
- *2022.02*: &nbsp;🎉🎉 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2022.02*: &nbsp;🎉🎉 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.

# 📝 Publications

<div class='paper-box'><div class='paper-box-image'><div><div class="badge">CVPR 2016</div><img src='images/500x300.png' alt="sym" width="100%"></div></div>
<div class='paper-box-text' markdown="1">

[Deep Residual Learning for Image Recognition](https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf)

**Kaiming He**, Xiangyu Zhang, Shaoqing Ren, Jian Sun

[**Project**](https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=DhtAFkwAAAAJ&citation_for_view=DhtAFkwAAAAJ:ALROH1vI_8AC) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong>
- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
</div>
</div>

- [Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet](https://github.com), A, B, C, **CVPR 2020**

# 🎖 Honors and Awards
- *2021.10* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2021.09* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.

# 📖 Educations
- *2019.06 - 2022.04 (now)*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2015.09 - 2019.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.

# 💬 Invited Talks
- *2021.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2021.03*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. \| [\[video\]](https://github.com/)

# 💻 Internships
- *2019.05 - 2020.02*, [Lorem](https://github.com/), China.
{% include_relative includes/others.md %}
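Taken together, the five `include_relative` additions above replace the template's sample sections, leaving `_pages/about.md` as a thin shell that stitches the new include files together. Roughly the following, assuming the front matter and the untouched lines above the hunk stay as they are:

```liquid
{% assign url = gsDataBaseUrl | append: "google-scholar-stats/gs_data_shieldsio.json" %}

<span class='anchor' id='about-me'></span>
{% include_relative includes/intro.md %}
{% include_relative includes/news.md %}
{% include_relative includes/pub.md %}
{% include_relative includes/honers.md %}
{% include_relative includes/others.md %}
```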
6 changes: 6 additions & 0 deletions _pages/includes/honers.md
@@ -0,0 +1,6 @@
# 🎖 Honors and Awards
- Singapore International Graduate Award (SINGA) from A*STAR.
- ZJU Graduate of Merit, Triple-A Graduate.
- National Undergraduate Electronics Design Contest, First Prize.
- China College Students’ ‘Internet+’ Innovation and Entrepreneurship Competition, First Prize.
- Won the People's Scholarship for three consecutive years (Top 1%).
8 changes: 8 additions & 0 deletions _pages/includes/intro.md
@@ -0,0 +1,8 @@
I am a second-year PhD student in the [IMPL Lab](https://impl2023.github.io/) at the Singapore University of Technology and Design (SUTD), where I am fortunate to be supervised by Prof. [Na Zhao](https://na-z.github.io/).
I am also supported by the Agency for Science, Technology and Research (A*STAR) through the [SINGA](https://www.a-star.edu.sg/Scholarships/for-graduate-studies/singapore-international-graduate-award-singa) scholarship, under the supervision of Prof. [Xulei Yang](https://www.google.com/search?q=xulei+yang).
Prior to this, I obtained my Master’s degree from Zhejiang University in 2023 and worked as a research intern at Alibaba DAMO Academy.

My research interests include multi-modal scene understanding and generation. Currently, I focus on building a comprehensive understanding of complex scenes by leveraging multi-modal features (e.g., LiDAR and RGB images). I am also interested in transferring learned knowledge to address real-world challenges such as domain shift.


<!-- My research interest includes neural machine translation and computer vision. I have published more than 100 papers at the top international AI conferences with total <a href='https://scholar.google.com/citations?user=DhtAFkwAAAAJ'>google scholar citations <strong><span id='total_cit'>260000+</span></strong></a> (You can also use google scholar badge <a href='https://scholar.google.com/citations?user=DhtAFkwAAAAJ'><img src="https://img.shields.io/endpoint?url={{ url | url_encode }}&logo=Google%20Scholar&labelColor=f6f6f6&color=9cf&style=flat&label=citations"></a>). -->
3 changes: 3 additions & 0 deletions _pages/includes/news.md
@@ -0,0 +1,3 @@
# 🔥 News
- *2025.05*: &nbsp;🎉🎉🎉 One paper is accepted by ICML 2025!
- *2025.05*: &nbsp;My new homepage is now live!
13 changes: 13 additions & 0 deletions _pages/includes/others.md
@@ -0,0 +1,13 @@
<!-- - *2021.10* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2021.09* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.

<!-- # 📖 Educations
- *2019.06 - 2022.04 (now)*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2015.09 - 2019.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.

# 💬 Invited Talks
- *2021.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2021.03*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. \| [\[video\]](https://github.com/)

# 💻 Internships
- *2019.05 - 2020.02*, [Lorem](https://github.com/), China. -->
52 changes: 52 additions & 0 deletions _pages/includes/pub.md
@@ -0,0 +1,52 @@
# 📝 Publications

A full publication list is available on my [Google Scholar](https://scholar.google.com/citations?user=l_6n20kAAAAJ) page.

<!-- Paper 1 -->
<div class='paper-box'><div class='paper-box-image'><div><div class="badge">ICML 2025 </div><img src='images/papers/3-IAL-ICML25.png' alt="sym" width="100%"></div></div>
<div class='paper-box-text' markdown="1">

[**ICML 2025**] **How Do Images Align and Complement LiDAR? Towards a Harmonized Multi-modal 3D Panoptic Segmentation** (paper coming soon) \\
**Yining Pan**, Qiongjie Cui, Xulei Yang, Na Zhao

- This paper proposes the Image-Assists-LiDAR (IAL) model, which harmonizes LiDAR and images through synchronized augmentation, token fusion, and prior query generation.
- IAL achieves state-of-the-art performance on 3D panoptic segmentation benchmarks, outperforming baseline methods by over 4%.

<!-- [**Project**](https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=DhtAFkwAAAAJ&citation_for_view=DhtAFkwAAAAJ:ALROH1vI_8AC) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong> -->
<!-- - Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. -->
</div>
</div>

<!-- Paper 2 -->
<div class='paper-box'><div class='paper-box-image'><div><div class="badge">CVPR 2024 </div><img src='images/papers/2-InstructVideo-CVPR24.png' alt="sym" width="100%"></div></div>
<div class='paper-box-text' markdown="1">

[**CVPR 2024**] [InstructVideo: Instructing Video Diffusion Models with Human Feedback](https://arxiv.org/abs/2312.12490) \\
H. Yuan, S. Zhang, X. Wang, Y. Wei, T. Feng, **Yining Pan**, Y. Zhang, Z. Liu, S. Albanie, D. Ni \\
[![GitHub Stars](https://img.shields.io/github/stars/damo-vilab/i2vgen-xl?style=social)](https://github.com/damo-vilab/i2vgen-xl)
[![GitHub Forks](https://img.shields.io/github/forks/damo-vilab/i2vgen-xl?style=social)](https://github.com/damo-vilab/i2vgen-xl)
[[Project page]](https://instructvideo.github.io/)

- InstructVideo is the first research attempt that instructs video diffusion models with human feedback.
- InstructVideo significantly enhances the visual quality of generated videos without compromising generalization capabilities, with merely 0.1% of the parameters being fine-tuned.

<!-- [**Project**](https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=DhtAFkwAAAAJ&citation_for_view=DhtAFkwAAAAJ:ALROH1vI_8AC) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong> -->
<!-- - Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. -->
</div>
</div>

<!-- Paper 3 -->
<div class='paper-box'><div class='paper-box-image'><div><div class="badge">ICCV 2023 </div><img src='images/papers/1-RLIPv2-ICCV23.png' alt="sym" width="100%"></div></div>
<div class='paper-box-text' markdown="1">

[**ICCV 2023**] [RLIPv2: Fast Scaling of Relational Language-Image Pre-training](https://arxiv.org/abs/2308.09351) \\
H. Yuan, S. Zhang, X. Wang, S. Albanie, **Yining Pan**, T. Feng, J. Jiang, D. Ni, Y. Zhang, D. Zhao \\
[![GitHub Stars](https://img.shields.io/github/stars/JacobYuan7/RLIPv2?style=social)](https://github.com/JacobYuan7/RLIPv2)
[![GitHub Forks](https://img.shields.io/github/forks/JacobYuan7/RLIPv2?style=social)](https://github.com/JacobYuan7/RLIPv2)

- RLIPv2 extends [RLIP](https://arxiv.org/abs/2209.01814) with a new language-image fusion mechanism designed to scale to much larger datasets.

<!-- [**Project**](https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=DhtAFkwAAAAJ&citation_for_view=DhtAFkwAAAAJ:ALROH1vI_8AC) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong> -->
<!-- - Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. -->
</div>
</div>
2 changes: 1 addition & 1 deletion _sass/_utilities.scss
@@ -220,7 +220,7 @@ body:hover .visually-hidden button {
}

.fa-instagram {
color: $instagram-color;
color: $github-color;
}

.fa-lastfm,