Where computer graphics, computer vision, and AI meet | Penn Today
Source: https://penntoday.upenn.edu/news/where-computer-graphics-computer-vision-and-ai-meet
Archived: 2026-04-23 17:22
Where computer graphics, computer vision, and AI meet | Penn Today
News from
University of Pennsylvania
Lingjie Liu, an assistant professor in the Department of Computer and Information Science in the School of Engineering and Applied Science, uses AI both as a subject of research and as a core methodology for building new 3D visual computing systems. Her work resides at the interface of computer graphics, computer vision, and AI, where she focuses on learning-based methods for representing, reconstructing, and generating 3D humans and scenes from visual observations.
“More broadly, I am interested in how AI can help us move from pixels to structured 3D understanding: recovering geometry, appearance, motion, and dynamics in ways that are not only visually compelling but also controllable and useful for downstream applications,” she says.
Image: School of Engineering and Applied Science
Liu is intrigued by a new genre of 3D reconstruction and rendering algorithms for human characters and general scenes. She says this area is especially exciting because it sits right between strong mathematical rules about geometric structures and the flexibility of modern machine learning.
Classical computer graphics provides powerful built-in rules regarding geometry, rendering, and animation, while deep learning is able to handle the complexity, ambiguity, and scale of real-world data, she explains. Combining the two is therefore particularly important for rendering realistic human characters and general scenes, which are “highly dynamic, visually complex, and difficult to model with purely hand-engineered pipelines.”
Recently, Liu has been especially excited about projects that push 3D vision and generation beyond visual realism toward more physically accurate motion. For example, in the PhysCtrl project, Liu and researchers explored how to make video generation more physically grounded (i.e., consistent with physics) and controllable.
“Instead of generating motion that only looks plausible, the goal is to model physical dynamics in a way that can respond meaningfully to force and material parameters,” she says. “I find that direction exciting because it moves generative models toward being more interpretable, editable, and connected to the physical world.”
Another project, PhysHMR, focuses on reconstructing physically plausible human motion from a single-camera video.
“A central challenge there is that motion can look reasonable in image space while still being unstable or physically implausible in 3D,” she says. She adds that in PhysHMR, they address this issue by teaching an AI system to convert visual input into realistic human movement within a physics-based simulator, producing motion that is both visually aligned and physically grounded.
“More broadly, these projects reflect a direction I care a lot about: building models that do not just reconstruct or generate what the world looks like but also capture how it moves and behaves,” Liu says.
In the future, Liu says she sees her work with AI growing toward a much tighter integration of visual modeling, physical reasoning, and controllable generation.
“For a long time, reconstruction and rendering methods have focused primarily on visual realism — making outputs look accurate or photorealistic,” she says. “But I think the next important step is to build models that are not only visually convincing but also physically grounded, structurally consistent, and easier to control.”
On Wednesday, April 22, Liu will discuss “Beyond Photorealism: 3D Reconstruction and Generation with Multimodal and Physical Grounding” from noon to 1:15 p.m. in Amy Gutmann Hall, Room 414. The talk is also available on Zoom at https://upenn.zoom.us/j/91849643116.
Credits
Writer
Greg Johnson