Nearly half of the global economy runs on physical operations — trucking, construction, agriculture, oil and gas, logistics, field services, utilities and infrastructure, and more. These industries keep our shelves stocked, our power on, and our economy moving. They can also be some of the most dangerous.
At Motive, we believe that artificial intelligence (AI) can and will improve safety in these sectors. It’s not just about preventing accidents — it’s about saving lives, protecting jobs, and building a better, more resilient physical economy. But for AI to deliver on that promise, we need to be transparent about how our products perform. Unfortunately, not everyone in the industry agrees.
The stakes are too high to hide
In nearly every field where AI is applied — from medical diagnostics to content generation — rigorous benchmarking is the norm. Independent evaluations, transparent metrics, side-by-side comparisons. That’s how innovation happens. And it’s how trust is earned.1
But when it comes to AI-powered driver safety technologies, that standard is being undermined. Instead of encouraging transparency, some vendors, like Samsara, are fighting to prevent it.
Samsara’s Terms of Service don’t allow customers to perform benchmark testing without Samsara’s express prior written consent. Adding an anti-benchmarking clause is anti-competitive and anti-safety.
The clause that keeps customers in the dark
For years, software companies have quietly inserted restrictive benchmarking clauses into their contracts.2 These clauses allow them to sue customers, researchers, and scientists to prevent them from benchmarking the performance of their products — even when those comparisons would improve safety outcomes on our roads.3
It’s a tactic borrowed from early software disputes — often called the “DeWitt Clause”4,5 — but the implications here are far more dangerous. In the physical economy, weak AI isn’t just a product issue. It’s a safety risk. And keeping customers in the dark only increases that risk.
In 2023, Motive commissioned a study from the Virginia Tech Transportation Institute (VTTI) to independently benchmark the performance of leading AI dash cams. The findings were clear: Motive's AI successfully detected unsafe driving behavior up to 4x more often than Samsara's.
Samsara did not like the findings of the Virginia Tech study, or the fact that it was losing major customers to Motive because of the superior accuracy of our AI Dashcam. Samsara is now suing Motive to prevent us from further distributing the Virginia Tech study and from commissioning more independent third-party studies. I think we should ask why.
It’s time to raise the standard
We’ve removed the DeWitt Clause from our terms of service because it promotes censorship, prevents others from benefiting from independent research about product performance, and diminishes transparency in markets predicated on safety. We encourage you to read the study. Run a side-by-side trial. And, most importantly, decide for yourself based on more, not less, information.
We welcome public comparisons. We’ll show up, and we’ll stand by the results.
We know performance benchmarks make our products better and roads safer. We welcome them. And we think vendors in this space should feel the same. Driver safety is too important.
The bottom line
In a world where 50% of GDP is powered by high-risk, physical operations, safety isn’t a nice-to-have — it’s a competitive edge. It keeps workers protected, operations running, and businesses growing. If we want to build trust in this technology, we have to be willing to test it. And share the results.
At Motive, we’re proud to lead the way. Let’s raise the bar for the entire industry.
1. For autonomous vehicles, for example, safety benchmarking is a must. Shouldn't safety benchmarking matter even more when there's a driver? Waymo released AV benchmarking several years ago, and a common standard for AV benchmarking (including collision avoidance) is being developed.
2. See the Electronic Frontier Foundation's 2005 article on EULA terms that prohibit public criticism of products and why they violate the First Amendment and consumer-protection principles.
3. See David Wheeler's 2017 essay on the DeWitt Clause and why it should be illegal.
4. See Databricks' 2021 blog post advocating for the elimination of DeWitt Clauses.
5. See Cube's 2022 blog post explaining the Databricks/Snowflake benchmark wars.