Traditionally, data quality systems evaluate respondents at specific stages of the survey journey: before entry, at the start, and at the end. Signals at each stage help confirm the respondent’s authenticity and expected behavior.
Those systems continue to work; they’re efficient and necessary. In 2025, however, we saw a narrow but important gap between those checkpoints, and a new question emerged: “What happens if the respondent changes during the survey itself?”
This question surfaced as we began testing survey agents for non-fraud use cases: confirming redirects, validating length of interview, and identifying logic or incidence issues before launch. Around the same time, customers began asking a different kind of question, prompted by open-ended responses that appeared unusually polished and well-crafted. Individually, neither signal was concerning. Together, however, they shifted attention from the responses themselves to the stability of the session.
The Pattern
As we combined internal testing with real customer examples, a pattern emerged. The issue wasn’t simply what respondents were writing, but how surveys were being completed. In a small number of cases, respondents entered the platform with clean histories, passed pre-survey checks, and began surveys normally. Later in the session, subtle signals shifted. Device characteristics changed. The interaction patterns no longer matched how the session had begun.
This wasn’t widespread, and it wasn’t a failure of pre-survey quality screening. It was a break in continuity: the point at which the respondent who qualified was no longer the one completing the survey.
Why Agentic Takeover
For a fraudster, navigating a screener for a few minutes and then handing the remainder of the survey to an agent is a far more efficient way to scale participation: more completed surveys with less effort. Understanding that motivation matters because it directly shapes which quality signals are most effective. Automated agents interact differently from real people, and those differences tend to appear after qualification, not before it.
What Continuity Checks Do
Continuity focuses on session consistency from start to finish. If device, location, and behavioral signals remain uniform throughout the survey, the session is considered intact. If those signals shift during the survey, the session is flagged for evaluation.
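To make the mechanics concrete, here is a minimal sketch of how a continuity check could compare a session’s opening signals against later snapshots. Everything in it (the SessionSnapshot fields, the tolerance value, the function names) is an illustrative assumption for this example, not a description of any production system.

```python
# A minimal sketch of a mid-survey continuity check. Periodic "snapshots"
# of device and behavioral signals are assumed to be collected during the
# session; all field names and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class SessionSnapshot:
    user_agent: str             # reported browser/device string
    timezone: str               # client timezone at capture time
    ip_prefix: str              # coarse network location (e.g., a /24)
    median_keystroke_ms: float  # typing cadence on open-ended questions


def continuity_break(start: SessionSnapshot, later: SessionSnapshot,
                     cadence_tolerance: float = 0.5) -> bool:
    """Return True if the session no longer looks like the one that started.

    Device and network identity are expected to stay fixed for the whole
    session; typing cadence is allowed to drift within a tolerance band
    (here +/-50%) before it counts as a break.
    """
    if later.user_agent != start.user_agent:
        return True
    if later.timezone != start.timezone:
        return True
    if later.ip_prefix != start.ip_prefix:
        return True
    # Behavioral drift is a weaker signal than a hard identity change,
    # so it only flags on a large relative shift in typing rhythm.
    baseline = max(start.median_keystroke_ms, 1.0)
    drift = abs(later.median_keystroke_ms - baseline) / baseline
    return drift > cadence_tolerance


def session_intact(snapshots: list[SessionSnapshot]) -> bool:
    """Treat the first snapshot as the baseline; flag any later break."""
    start = snapshots[0]
    return not any(continuity_break(start, s) for s in snapshots[1:])
```

The design point the sketch illustrates is the one described above: identity signals are evaluated as hard constraints, while behavioral signals are evaluated as drift against the session’s own baseline rather than against a global norm.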
Importantly, continuity is not a judgment on open-ended quality, nor does it assume AI use is widespread. Well-written responses, mobile autocorrect, or respondents using assistance tools are separate considerations. Continuity isolates one specific scenario: when control of the session itself appears to be handed off.
These cases represent a small minority of traffic, but they are significant because they reflect the evolving nature of fraudulent behavior. Understanding where a session changes hands provides researchers with a clearer foundation for evaluating the data that emerges from it, and for staying ahead as tools and incentives continue to evolve.
Mark has nearly 20 years of experience in global business leadership, corporate entrepreneurship, and technology innovation. He has worked passionately within the industry, building businesses and tools and gathering evidence to improve data quality. In his current role as Chief Product Officer of PureSpectrum, he oversees product, engineering, data science, and supply teams across six countries and works to deliver new PureSpectrum features that will continue to move the industry forward. Mark graduated from the University of Washington with a double major in Marketing and Corporate Entrepreneurship and added a Certificate in Contract Management from the University of Washington Extension School.
