Below are the notes collected from the Siggraph 2025 Birds of a Feather event on pipelines. The BoF space was organized into topic groups where attendees could move between topics and discuss as they wished. As one of the organizers, I asked folks in each group to try and keep notes as much as possible. Below is the result of that process.
These notes ask as many questions as they answer, follow different styles, and are often terse. However, they also provide interesting insight into the topics covered and what each group discussed.
Notes by Topic
Accurately Measuring Artist Experience
tools & metrics - solvable
workflows are more complicated
cloud workstations
artists - being candid & having good internal collab between artists & toolchain teams
there are tech, psychology, & sociology components to this
- 5 whys (+1)
- training on sociology & how generational groups interact is vital
- artists are very vocal when things are missing; monitoring how the internet is used can take you…
- collab with ACM SIGGRAPH potential
investing in a UX team - interviewing the artists on the team; using Datadog to collect app usage stats & every click (see the sketch after this list)
- TD (Technical Director) output not very durable
- UX team critical in-betweeners
- (+ product managers)
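As a concrete illustration of the Datadog note above, here is a minimal sketch of what click-level usage telemetry could look like. It assumes the official `datadog` Python client with a locally running agent; the metric name and tags are hypothetical.

```python
from datadog import initialize, statsd

# assumes a Datadog agent listening on the default local DogStatsD port
initialize(statsd_host="127.0.0.1", statsd_port=8125)

def record_click(tool: str, widget: str) -> None:
    # one counter per interaction; tags let a UX team slice by tool and widget
    statsd.increment(
        "artist_tools.click",  # hypothetical metric name
        tags=[f"tool:{tool}", f"widget:{widget}"],
    )

record_click("publisher", "submit_button")
```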
change management as a missing link
getting to a common way of working
- standards compliance improvements leads to better interoperability
EA also invested in UX - how to balance multiplying the costs like this
- DreamWorks’ UX is outpacing their ability to dev and make improvements
- not all solved, still balancing velocity for UX
- gathering the metrics still a challenge
- can we capture ROI?
- just tracking clicks not enough
- not having adoption from artists also super bad
has anybody invested in TD training so they learn HCI (Human-Computer Interaction)?
- expectation of quicker turnarounds
- tech doesn’t want to hand over
input latency tolerance - no clear threshold in the room for what latency is tolerable
- complaints are based on craft group & experience
- DreamWorks offering workstations to clients but isn’t instrumenting/measuring the thresholds
- office can have benefits for pedagogy
- artists need to be engaged to provide feedback
- small-scale test at JW Studios to time artist workflows
every 3-4 months, a questionnaire goes to everyone: “give us your honest opinion on what we are doing right now”
tech artist comment: artists don’t necessarily know that the process can be better - they just do the miserable thing
artists first try to off-road; their workloads become very strange
telemetry is great, if you can instrument all your tools
it’s good & bad - telemetry creates slowdowns in the environment
quantifying only gets you so far - relationships are a big component
- a lot gets lost in just learning what the ground truth is, like how to prioritize
- improving at all levels in team communications is hard to do, easy to request
- how to provide more avenues for communications?
- hard to prioritize due to resourcing, too
- how do you know the benefit?
UX approach - Bardel & Icon, 200-1000 people
- departments with technical artists highly attached to pipeline
- gave them ways to commit issues & fixes
- internal “open-source”
- elevated technical assistants
- artists with dev experience
- liaison for their department
- they direct these metrics for you
interface mock-ups & validating with users prior
- using Figma to create interactive prototypes
- devs also get better insights
- Qt Designer with signals usable for interface prototyping (see the sketch after this list)
- issues: overheads; expectation of being further ahead than the tool is
- comms challenge can be avoided by using more abstract tooling
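To make the prototyping point concrete, here is a minimal sketch of a clickable mock-up in PySide6; the dialog, labels, and slot names are all invented. The signal is wired to a stub slot, so users can exercise the flow before any pipeline code exists.

```python
from PySide6.QtWidgets import (
    QApplication, QDialog, QLabel, QPushButton, QVBoxLayout,
)

class PublishPrototype(QDialog):
    """Throwaway prototype: real layout, fake backend."""

    def __init__(self):
        super().__init__()
        self.setWindowTitle("Publish (prototype)")
        layout = QVBoxLayout(self)
        self.status = QLabel("Nothing published yet")
        button = QPushButton("Publish selected asset")
        button.clicked.connect(self.fake_publish)  # signal wired to a stub slot
        layout.addWidget(self.status)
        layout.addWidget(button)

    def fake_publish(self):
        # no pipeline calls: just enough behavior to walk a user through the flow
        self.status.setText("Pretend-published!")

if __name__ == "__main__":
    app = QApplication([])
    dialog = PublishPrototype()
    dialog.show()
    app.exec()
```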
what metrics?
- latency checks / spikes.
- popularity of toolsets - in search
- emoji feedback (sad, straight, happy)
functionality testing is very hard
- tools need to use production data which makes it difficult to test
no QA/testing person
- products in our space are not considered products at all
- retrospective on goals is valuable
- when you set a goal you get locked into a decision, especially in consensus-based cultures
need to create incentives for everyone and balance incentives with velocity/rituals
Pipelines of People: Elevating the Human Factor
pipeline is less technical and more psychological
pipeline is there to serve the people
pipe & connections
- it’s not about the thing it’s the why
- make the carrot not the stick
- it’s the personal factor
lowest common denominator should be avoided
practical side: listening actively
helping the studio
- the pipeline is the people
talk & conversation
- more production-centric
- get to their level; take time to acknowledge
- pipeline post-mortems
- it’s not rigid
- make sure to get positive feedback
- increasing efficiency is the purpose
- the pipeline should affect the culture
- feedback loops
why context-switching fatigue?
how to convince executives
- prototype/wireframe
- business $$
- trust/relationships
Pipeline Stewardship & Evolution
bring old code up to snuff?
- burn eyes maintaining legacy code
- interview artists about what features need to stay; find the pieces they use (user stories)
- write tests so we don’t break things
- very hard to keep documentation valid & useful
- video for documentation
- building stringency into the product (e.g. deprecations; see the sketch after this list)
- if I decide to deprecate a function, I have to do it & put up the merge requests
- raised hand problem - you do it since you found it
- how are you certain that if your pipeline guru leaves, you can keep going forward?
- when you write a comment, don’t describe what you are doing but why
- you have to go beyond unit tests
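On building deprecations into the product: one lightweight approach, sketched here rather than taken from anyone’s actual tooling, is a decorator that warns loudly while the old entry point keeps working. The `publish`/`publish_v2` names are hypothetical.

```python
import functools
import warnings

def deprecated(replacement=None):
    """Decorator that flags a function as deprecated but keeps it working."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            message = f"{func.__name__} is deprecated"
            if replacement:
                message += f"; use {replacement} instead"
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(replacement="publish_v2")
def publish(asset):
    # hypothetical legacy entry point, still functional while callers migrate
    print(f"publishing {asset} the old way")

publish("sq010_sh020_anim")  # works, but emits a DeprecationWarning
```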
how do you scope making changes?
- do you look at show breaks? or does everything need to keep moving?
trust from production
- confident releases
- some folks are trustful & others not
- sometimes you just have to go
has anyone brought in risk analysis?
- studio tech grows out of need; we end up with really interdependent systems & not modular at all
- people love to innovate, but there is a place & time
- releasing software bundles to tech leads first
- we need to support building a staging environment
- run assets/shots through staging (advanced users)
- friendly users (not cranky)
- if you want a benefit you have to be part of the testing
- testers might not tell you if they have bugs, they just switch back
- make submitting tickets super easy
too many support channels
- how to communicate & broadcast pipeline changes?
- make it less scary & more informative
- this is how this helps you
unit tests vs integration / full e2e tests
anyone supporting production-facing web apps? how do you version them?
does anyone give artists the ability to deploy a change?
- they don’t own the code, it’s just the CI
how do you measure performance diff?
- instrumentation for DCC (Digital Content Creation) app startup time (see the sketch after this list)
- there are so many variables
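One common pattern for measuring DCC startup time, sketched here assuming Maya and the same statsd setup as earlier (the environment-variable name is made up): the launcher stamps the clock before handing off.

```python
# launcher side: stamp the wall clock before handing off to the DCC
import os
import subprocess
import time

env = dict(os.environ, PIPE_LAUNCH_TIME=str(time.time()))  # hypothetical variable
subprocess.Popen(["maya"], env=env)
```

Then a startup script inside the DCC reports the delta once the UI is actually up:

```python
# DCC side (e.g. in Maya's userSetup.py): report the delta once startup completes
import os
import time
from datadog import statsd

startup_seconds = time.time() - float(os.environ["PIPE_LAUNCH_TIME"])
statsd.histogram("dcc.startup_seconds", startup_seconds, tags=["dcc:maya"])
```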
compulsory AI review on code changes
- helps a lot
- sharing the whole code base with AI & letting it flag all un-optimized code
what does telemetry mean in different context?
- collect data in Sentry (login info); be ahead of prod, be able to detect broken things (see the sketch after this list)
- measure tool versions
- file server latency, when render farm breaks
- head off problems before they happen
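For the Sentry note above, initialization might look like the sketch below; the DSN, release, environment, and tag values are all placeholders.

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    release="pipeline-tools@2.14.0",  # lets you spot a broken rollout early
    environment="staging",            # "be ahead of prod"
)
sentry_sdk.set_tag("dcc", "houdini")               # tool-version style tags
sentry_sdk.set_user({"username": "artist_login"})  # the login info mentioned above
```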
combining it
- tools have theirs, systems have theirs - hope to store it all in the same spot
- centralized is ideal (many want to, but aren’t set up like this)
how can telemetry help prevent issues?
- can see it but not everyone uses it
- getting into elastic & using anomaly detection
- others via Datadog; easy to get error reports & peaks & valleys
any questions from studios about having too much data?
- commercial products like Datadog can cost a lot
- open source is useful because we can control it, and want data in perpetuity
- how long do you keep the data?
- varying based on if the data is hot and who actually wants it
- finding balance of time
- have to track render farm for weather
- not tracking errors, more about who is using
- which tools use which versions of other things, what to maintain
what problem are you solving with logging of your data?
does everyone put licensing in? (yes)
small studio: more for tracking, logs, investigating errors. Suggests code use errors as warnings instead so not everything gets logged as an “error” (see the sketch below)
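A tiny sketch of that error-as-warning suggestion; the loader and fallback are hypothetical. Recoverable situations go to `logging.warning`, keeping `logging.error` for genuinely broken states.

```python
import logging

log = logging.getLogger("pipeline.io")

def load_texture(path):
    """Hypothetical loader: a missing file is recoverable here, so it is
    logged as a warning rather than inflating the error count."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        log.warning("texture missing, substituting placeholder: %s", path)
        return b""  # stand-in placeholder data
```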
how long it takes to load a scene
store how often something happens, helps with figuring out true impact
Grafana will show the tools
most frequently used app (for workstation caching)
Jaeger: open-source app for sending data somewhere, like a repo, that will process it later (see the sketch below)
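If the Jaeger note means distributed tracing, a minimal sketch with the OpenTelemetry Python SDK could look like the following; the span names, attributes, and collector endpoint are assumptions (recent Jaeger versions ingest OTLP directly).

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# ship spans to a collector now; analysis happens later in Jaeger's UI
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://jaeger:4317"))  # assumed host
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("pipeline.publish")
with tracer.start_as_current_span("export_geometry") as span:
    span.set_attribute("shot", "sq010_sh020")  # hypothetical attribute
    ...  # the actual export work
```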
tracking cross-platform?
- not always easy to link and correlate
track login vs performance
a “broken” channel in Slack
sometimes it’s not server performance but the network
track web apps too? use Qt, too? (yes)
- most web tools are artist tools
- things like ShotGrid, how to know if it’s you or them?
- move to Datadog and ping every so often to see
- then you can also see up and down status
- look up ISP performance too, to make sure it isn’t the network
use AI agents to run reports
- how is this different from crons?
- smarter, no need to write all the possibilities.
- Ex: management wants data for everything, AI agent simplifies it
- Ex: Deadline made a CSV for each file; made a list of questions and used an agent, which gave back time, pool resources, etc. (see the sketch after this list)
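For reference, here is roughly what one of those questions looks like when answered by hand with pandas, assuming a made-up layout for the per-job CSV; the agent’s value is skipping the need to write a script per question.

```python
import pandas as pd

# hypothetical columns for the Deadline-exported stats
jobs = pd.read_csv("job_stats.csv")  # assumed columns: job, pool, render_minutes

# "total render time per pool", one of the agent-style questions, by hand
print(jobs.groupby("pool")["render_minutes"].sum().sort_values(ascending=False))
```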
command center (WDAS)
- helps to see what to optimize
- sups like it, artists were hesitant
- used previous production data to estimate new show, used headcount, but team left so didn’t finish & had a compressed schedule
- Slack channel for all 911 issues & Grafana pages
Conclusion
Although I admit that these notes are not all coherent, I hope you found a few nuggets and takeaways. At the very least, I hope they gave you a decent sense of what people are thinking about and how the discussions progressed.
Thanks for reading!