First Newsletter of the Jinesis AI Lab (Zhijing Jin)


Jinesis · Feb 23, 2026 · to jin...@googlegroups.com

Jinesis AI Lab Newsletter

Website, LinkedIn, X (Lab), X (Zhijing), YouTube


Launch of Our Jinesis AI Lab

  • We formally established the Jinesis AI Lab at the University of Toronto in December 2024. Led by our Lab Director, Prof. Zhijing Jin, we have grown to 50+ active members, ranging from PhD and Master's students to undergraduates and research assistants around the world. We believe great research needs a global vision, so we have members from Canada, China, France, Germany, India, Italy, Korea, Morocco, Nepal, the Netherlands, Nigeria, Pakistan, Romania, Sweden, Switzerland, Vietnam, the UK, and the US.

  • Zhijing Jin is one of the two newly appointed Canada CIFAR AI Chairs in 2025, together with David Krueger. In Toronto, she is a Faculty Member at the Vector Institute, a Faculty Affiliate at the Schwartz Reisman Institute, and a Faculty Member of the Acceleration Consortium. She has also received an NSERC Discovery Grant from the Canadian government, which is comparable to the NSF CAREER Award in the US.

  • Zhijing also serves as a Faculty Affiliate at the Center for Human-Compatible AI (CHAI) at UC Berkeley and, since 2025, as a Faculty Member at the Future of Life Institute.

Latest Highlights

  • All 7 of our papers submitted to EACL 2026 were accepted, including 2 orals! 🎉

  • At the IASEAI 2026 conference at UNESCO in Paris, the Jinesis AI Lab is represented by 3 papers and 9 members. Zhijing will also give an invited talk at the IASEAI 2026 Workshop on Evaluating and Improving LLM Normative Competence.

  • Two of our papers, Accidental Vulnerability (IASEAI 2026) and GovSim (NeurIPS 2024), are cited in the latest International AI Safety Report 2026, led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts.

  • Our ICLR 2026 paper, “SocialHarmBench,” reveals LLM vulnerabilities to sociopolitical misuse, such as assisting propaganda and surveillance. This is a key contribution to one of our core research lines, the Sociopolitical Risks of LLMs.

  • Since the founding of Jinesis, we have produced 60 peer-reviewed papers at top AI conferences and 39 preprints. Our predoctoral research assistants have been admitted to PhD programs at the University of Toronto, ETH, CMU, the University of Cambridge, and many other prestigious institutions.

  • In 2025, our director Zhijing Jin gave 38 invited talks, including at ACL 2025, the ETH AI Center, the Berkeley CHAI Workshop 2025, the NeurIPS 2025 workshops, and the University of Copenhagen.

  • Feel free to follow our latest updates at http://x.com/ZhijingJin and our upcoming events at https://luma.com/jinesis.

Researcher News

  • Zhijing received the ELLIS PhD Award (2025), recognizing the best PhD thesis on AI across Europe. Her dissertation, Causality for Natural Language Processing, highlights how causal inference can strengthen the reliability and social impact of NLP and LLM research.

  • Our PhD student Yongjin Yang received the Connaught International Scholarship, a highly competitive entrance scholarship awarded university-wide to roughly 15–20 of the top incoming international PhD students each year at the University of Toronto. 

  • Our PhD student Yahang Qi has been selected for CANSSI Ontario’s 2025 cohort of MDoc Trainees.

  • Our PhD student Rohan Subramani established Aether, an LLM agent safety research group, and has been building a team of several full-time researchers.

  • Our Master’s student Andrei Muresanu was one of only two computer science students to receive the Vector Scholarship in AI.

  • We are mentoring three Cooperative AI Research Fellows in our lab: Dr. Van Quynh Thi Truong, Yves Bicker, and Mariana Meireles were among the 11 selected from 1,100+ applications globally (a ~1% acceptance rate). They will be conducting multi-agent AI safety research with us.

  • Since 2025, Zhijing has served as Co-Chair of the ACL Ethics Committee and held senior conference service roles, including Senior Area Chair at NAACL 2025 and ACL 2025, and Communications Chair at CLeaR 2025 and 2026. She is also co-organizing the Dagstuhl Workshop on Causality and Large Language Models on Apr 07–10, 2026.

Lab News

  • The Jinesis lab has received a total of CA$8 million in research grants since 2025. Big thanks to our major funders, including Coefficient Giving, Schmidt Sciences, the UK AISI Alignment Project (in partnership with OpenAI’s Alignment team and The AI Safety Tactical Opportunities Fund), the Canadian AI Safety Institute (CAISI) at CIFAR, NSERC, the AI Safety Fund, the Acceleration Consortium, CANSSI, the Survival and Flourishing Fund, and the Max Planck Institute for Intelligent Systems.

  • Recognized for our work in frontier AI safety, we are one of four awardees of the Canadian AI Safety Institute (CAISI) Research Program at CIFAR in 2026. 

Latest Publications in 2026

* = Co-first authorship

Recent Talk Videos

Job Posts

Communications Manager and Admin Support: We are looking for a part-time or full-time contributor with a CS background to support our lab’s scientific communications and admin work. The role runs 20–40 hours/week, is mainly for people eligible to work in Canada, and will be recruited through the University of Toronto. If you are interested, fill out this form and write “Jinesis Newsletter” in the “How do you know us?” question.


Group photo of Zhijing and Jinesis students in Toronto, September 2025.


To stay updated on our research and progress as a lab, please follow us on LinkedIn, X (Lab), X (Zhijing), and YouTube.


To subscribe to future newsletters (or share with others so they can subscribe), send an email to jinesis+...@googlegroups.com

