Session: Most “Open-Source” AI Isn’t. And What We Can Do About That

What does “open-source AI” really mean? If you publish the weights for a neural network, is that much different from publishing only an executable binary without the source? What if the model has memorized data or code that it can reproduce without attribution? What if you interrogate a model about why it made a decision and get a wrong explanation? How can you debug and fix such a model, or even understand what’s going on inside it?

Black-box AI and ML techniques are widely used, but alternative approaches exist that can achieve the aims of the open-source movement. Join Dr. Chris Hazard in exploring what it takes to make a truly open-source AI by diving into what it means to “program with data,” how AI pitfalls relate to classic open-source concerns, and how we can slow the accrual of “intellectual debt.”

Presenters: