Digital Production Pipeline Association

Notes from Global CG Pipelines BoF
Tuesday, Aug 12, 2025

Below are the notes collected at the SIGGRAPH 2025 Birds of a Feather event on pipelines. The BoF space was organized into topic groups, and attendees could move between topics and discuss as they wished. As one of the organizers, I asked folks in each group to try to keep notes as much as possible. What follows is the result of that process.

These notes ask as many questions as they answer, follow different styles, and are often terse. However, they also provide interesting insight into the topics and into what each group was discussing.

Notes by Topic

Accurately Measuring Artist Experience

tools & metrics - solvable

workflows are more complicated

cloud workstations

artists being candid & good internal collaboration between artists & toolchain teams

tech, psychology, sociology components to this

investing in a UX team - interviewing the artists on the team; using Datadog to collect app usage stats & every click
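
As an aside: click-level instrumentation like this is fairly cheap to wire up. Here is a minimal sketch using the Datadog Python client (the datadog package); the metric, tag, and tool names are made up for illustration:

```python
# Sketch: per-interaction usage telemetry via the Datadog DogStatsD client.
from datadog import initialize, statsd

# Assumes a DogStatsD agent running locally on the default port.
initialize(statsd_host="localhost", statsd_port=8125)

def record_click(tool_name: str, widget: str) -> None:
    """Emit one counter per UI interaction, tagged by tool and widget."""
    statsd.increment(
        "pipeline.ui.click",
        tags=[f"tool:{tool_name}", f"widget:{widget}"],
    )

# e.g. called from a button handler inside an artist-facing tool:
record_click("asset_browser", "publish_button")
```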

change management as a missing link

getting to a common way of working

EA also invested in UX - how do you balance multiplying costs like this?

has anybody invested in TD training so they learn HCI (Human-Computer Interaction)?

input latency tolerance - no clear threshold in the room for what latency is tolerable

every 3-4 months, a questionnaire goes out to everyone: “give us your honest opinion on what we are doing right now”

quantifying only gets you so far - relationships are a big component

UX approach - Bardel & Icon, 200-1000 people

interface mock-ups & validating with users beforehand

what metrics?

functionality testing is very hard

no QA/testing person

need to create incentives for everyone and balance incentives with velocity/rituals

Pipelines of People: Elevating the Human Factor

pipeline is less technical and more psychological

pipeline is there to serve the people

pipe & connections

lowest common denominator should be avoided

practical side: listening actively

helping the studio

talk & conversation

why context switching causes fatigue

how to convince executives

Pipeline Stewardship & Evolution

how do you bring old code up to snuff?

how do you scope making changes?

trust from production

has anyone brought in risk analysis?

too many support channels

unit tests vs integration / full e2e tests
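
For context on that split: a common pattern in pipeline repos is to keep unit tests dependency-free and mark the slow, environment-dependent tests so they can be deselected in quick runs. A sketch with pytest; version_up, the test data, and the integration marker are all hypothetical:

```python
# Sketch: fast unit test vs. marked integration test in pytest.
import pytest

def version_up(path: str) -> str:
    """Toy unit under test: bump a trailing _vNNN version token."""
    stem, _, version = path.rpartition("_v")
    return f"{stem}_v{int(version) + 1:03d}"

def test_version_up():
    # Pure unit test: no filesystem, no services, runs in milliseconds.
    assert version_up("shot010_comp_v001") == "shot010_comp_v002"

@pytest.mark.integration
def test_publish_round_trip(tmp_path):
    # Integration test: touches the real filesystem (and, in a real
    # pipeline, services like the asset database). Register the marker
    # in pytest.ini and skip it for quick runs: pytest -m "not integration"
    src = tmp_path / "shot010_comp_v001.exr"
    src.write_bytes(b"not a real exr")
    assert src.exists()
```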

anyone supporting production-facing web apps? how do you version them?

does anyone give artists the ability to deploy / take a change themselves?

how do you measure performance diff?
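
One low-tech approach: wrap the operations you care about in a timer, tag each measurement with the release, and compare distributions rather than single runs. A minimal sketch (the operation and version names are made up):

```python
# Sketch: timing wrapper for before/after performance comparisons.
import logging
import time
from contextlib import contextmanager

log = logging.getLogger("pipeline.perf")

@contextmanager
def timed(operation: str, version: str):
    """Record the wall-clock duration of a block, tagged with a release."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        # Ship these to a telemetry backend so the before/after
        # distributions can be diffed across releases.
        log.info("op=%s version=%s ms=%.1f", operation, version, elapsed_ms)

with timed("scene_open", version="2.3.0"):
    time.sleep(0.05)  # stand-in for the real work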

compulsory AI review on code changes

Tangible Telemetry: Tools and Use-Cases

what does telemetry mean in different contexts?

combining it

how can telemetry help prevent issues?

any questions from studios about having too much data?

what problem are you solving by logging your data?

track web apps too? use Qt, too? (yes)
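
On the Qt side, one cheap way to get usage data is to hook every QAction in a tool. A sketch assuming PySide2 (in Qt 6, QAction lives in QtGui instead); emit_event is a stand-in for whatever telemetry sink is actually in use:

```python
# Sketch: log every menu/toolbar action triggered in a PySide2 tool.
from PySide2 import QtWidgets

def emit_event(name: str) -> None:
    # Stand-in for a real telemetry sink (StatsD, HTTP endpoint, etc.).
    print(f"telemetry: action_triggered {name}")

def instrument_actions(window: QtWidgets.QMainWindow) -> None:
    """Connect a logging slot to every QAction found under the window."""
    for action in window.findChildren(QtWidgets.QAction):
        label = action.objectName() or action.text()
        # Default-arg lambda captures the label per action, not by reference.
        action.triggered.connect(lambda checked=False, n=label: emit_event(n))
```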

use AI agents to run reports

command center (WDAS)

Conclusion

Although I admit that these notes are not all coherent, I hope you found a few nuggets and takeaways. At the very least, I hope they gave you a decent sense of what people are thinking about and how the discussions progressed.

Thanks for reading!
