Why We Ran CONNECT: Boston
Proteomics has reached an inflection point. Instruments are faster, data is richer, and the science is more interconnected than ever. And yet, in too many pipelines, proteomics still shows up at the end of the story - explaining what happened, instead of shaping what happens next. That tension drew us to Boston - arguably the centre of today’s life science universe - to ask a simple but pressing question:
What will it take for proteomics to become a decision‑engine, not just a post‑mortem diagnostic tool?
Our theme for the night captured that ambition:
Upstream Proteomics, Downstream Impact – How Leadership + Influence Can Drive a Paradigm Shift.
This time, we wanted to centre a different kind of leadership: the voices of early‑career researchers (ECRs) who are shaping the next era of proteomics. In partnership with HUPO ECR, we set out to create more than another panel: we wanted a catalyst.
A space where emerging scientists could sit shoulder‑to‑shoulder with pharma directors, academics, core facility leads, and startup founders - and explore the grey areas between science, culture, and leadership.
To keep the exchange candid, we ran the evening in a fishbowl format: a circle of chairs, with two kept open so anyone could step in, speak, and step back out. Our role as hosts was simply to guide the flow. The real spark came from the mix of perspectives: ECRs beside industry veterans, technologists beside biologists, all probing what influence really looks like in proteomics today.
Above: The fishbowl format awaiting guests before the discussion began.
The conversation ran under the Chatham House Rule: ideas could be used, but not attributed to any individual or organisation. That commitment turned polite panel talk into honest debate. To honour that, this write‑up shares perspectives and stories, not names or affiliations.
Our intent for the night was simple:
- Surface hard questions and trade‑offs
- Bridge academia, pharma, clinical, and computational worlds
- Highlight emerging leadership practices and real‑world wins
- Spark collaborations that move proteomics forward
This wasn’t about polished takes; it was about progress: exploring the space between perspectives and uncovering what proteomics needs next.
That spirit set the stage for a keynote that asked everyone in the room to rethink not just how we collect data, but how we lead through it.
Setting the Scene: A Keynote on Speed, Scale, and Meaning
“If results are still coming back six months later, they’re already too late.”
Our keynote speaker - a proteomics leader from large‑scale drug discovery - didn’t ease into the topic. They opened with a story about barrier‑breaking, comparing the evolution of proteomics to Roger Bannister’s four‑minute mile. Once one person proves something is possible, the entire field’s definition of “normal” shifts almost overnight.
In proteomics, that barrier has been throughput.
Five years ago, running a handful of samples per day felt cutting‑edge. Today, with advances in LC‑MS, automation, and multiplexing, hundreds - even thousands - of proteomes can be measured daily. The room was full of people who have helped make that possible.
But as our speaker reminded us, speed alone isn’t progress.
“We’ve smashed the throughput barrier. Now we’re staring at a meaning barrier.”
The bottleneck has moved downstream - from the instrument to the interface between people, data, and decisions. The problem is no longer generating data; it’s making sense of it quickly enough for it to matter.
They painted a scene that felt uncomfortably familiar: scientists spending days cleaning files, exporting spreadsheets, and fighting version chaos while project decisions roll on without them. By the time a report lands, the chemistry has already moved on.
Listening, it became clear the call wasn’t just technical - it was cultural. The keynote wasn’t asking us to change what we measure, but how we work.
Proteomics, they argued, must evolve from a retrospective service into a live decision‑support engine - one that helps teams act while questions are still fresh. That means embracing cloud compute, automation, and intuitive visualisation so insights can flow at the same pace as curiosity.
Above: Our keynote speaker shared how their team moved from static reports to interactive data suites.
But they also zoomed out to leadership. Do teams really have the infrastructure - and the permission - to move at the speed of their data? It’s not only about technology; it’s about mindset.
- Do we encourage scientists to act on partial but robust evidence, or do we wait for impossible completeness?
- Do we treat data as currency for decisions, or as documentation for audits?
In closing, they returned to purpose. The reason for all this speed and structure isn’t efficiency for its own sake - it’s impact.
If proteomics can give us confidence earlier, we fail less, design better, and patients see therapies sooner.
For a moment, the room went quiet. It felt like a reset - the perfect launchpad for the debate that followed.
Inside the Fishbowl: Where Science Meets Real Talk
With that provocation hanging in the air, the fishbowl began. The format encouraged spontaneity: anyone could step into the open chair to share a perspective, challenge an assumption, or ask the next question.
The core circle brought together:
- A director of proteomics at a global pharma company
- A senior mass spectrometry leader from a protein‑focused biotech
- A principal scientist from an innovation institute
- A head of proteomics at a targeted therapy company
- A postdoctoral fellow from a leading medical school
From there, the circle kept expanding as more people stepped in. We opened with a deceptively simple question:
“What do you do with all that data - and how do you make it meaningful?”
The discussion circled a shared frustration: drug discovery isn’t short on big data or big ideas, yet the most basic question - what does this drug actually do to its protein target and the broader biology? - often remains only partially answered.
One participant put it bluntly: late‑stage failures aren’t just bad luck; they’re a symptom of protein blindness - the habit of moving molecules forward without fully understanding the underlying protein biology. With phase‑III failure rates stubbornly high, and most drugs aimed at proteins, the opportunity is obvious: make proteomics central to discovery and development, not a supporting slide at the end.
The first spark from the audience came from an academic researcher:
“From academia, it feels like proteomics is already an established technology. But what I’m hearing is resistance. Do you still have to fight to prove its value in drug discovery?”
The question resonated. A senior industry leader confirmed the battle for trust isn’t over. The challenge is no longer to prove that the instruments work; it’s to prove that proteomics changes decisions, not just data tables.
Another contributor captured the pressure neatly:
“You run a proteomics experiment and then you have to convince leadership: what are we doing and why does it matter? Data quality, your confidence, and how you tell that story to different audiences are everything.”
As we listened, a pattern emerged: generating data isn’t enough. It has to be actionable.
One team described how interactive dashboards had changed their project meetings. Instead of emailing slides after the fact, they filtered live to show that all off‑targets were clustering on a single protein family. What used to be a retrospective report turned into an in‑room decision.
By building shared workspaces where chemists, biologists, and project leaders can explore data together, proteomics shifts from a one‑way report‑out to a shared decision surface.
Another participant grounded it with a story from their own company:
“Proteomics started as a way to explain expensive phase‑II failures. Over time, investing up‑front became clearly cheaper than failing late. When we turned proteomics into an early decision tool, engagement from leadership went through the roof.”
What stood out was how quickly the conversation moved from data to people - from pipelines to leadership behaviours. It wasn’t just about better analysis; it was about trust, mentorship, and whether leaders create space for learning and experimentation.
Toward the end, the focus shifted to AI. Someone offered a dry, balanced observation:
“People throw ‘AI’ around too loosely. Yesterday it was statistics, then ‘big data’, now it’s AI. The tools change, but the core idea is evolving, not replacing everything that came before.”
The laughter that followed wasn’t cynical; it was recognition. We’re working in a field that’s constantly rebranding itself while racing ahead, and sometimes it’s hard even for experts to keep up.
By the close of the discussion, one theme kept surfacing: proteomics will only reach its potential if both the science and the way we lead it evolve together.
Five Key Takeaways: Actions for Emerging and Established Leaders
For early‑career researchers, these takeaways offer a blueprint for building influence early - how to communicate across disciplines, deliver insight fast, and become decision‑makers rather than data providers.
For established leaders, they act as a mirror - a reminder that culture, infrastructure, and mentorship determine whether proteomics thrives or stalls.
Together, they sketch what it may take to turn proteomics from “interesting data” into a shared language of discovery.
Five recurring themes emerged from the night.
Discussion 1: Make Proteomics Decision‑Ready
There was near‑unanimous agreement: proteomics won’t gain influence until it drives decisions. Several participants admitted their data often arrives too late, buried in static decks that never quite spark the right conversation.
One leader shared how their team flipped that script. In a key meeting, they live‑filtered the dataset to show that all off‑targets clustered on a single protein family. The answer landed in the room, in real time - not weeks later. That changed how their organisation viewed interactive data.
Another panelist offered a simple rule:
“Convert complexity into something people can explore for themselves. If every answer requires a specialist interpreter, you’ll create decision fatigue, not clarity.”
We also heard how much it matters to meet leadership where they are:
“Be willing to explain the same idea five different ways, in language your stakeholders actually use. That’s not dumbing it down; it’s doing your job.”
The teams that positioned proteomics as an up‑front decision tool rather than a retrospective report saw engagement from leadership leap.
Key takeaway #1: Clarity builds influence
- For ECRs: Practice clarity and iteration. Learn to tell the story of your data in leadership language, not just technical language.
- For leaders: Build structures where data can be explored, not just presented. The best decisions arise from shared curiosity, not one‑way slides.
Discussion 2: Time‑Box from Sample → Insight
Speed matters. Discovery teams often iterate chemistry in three‑to‑four‑week cycles. Proteomics has to move on that tempo if it wants a seat at the table.
Panelists mapped out a realistic, decision‑aligned timeline:
- Sample preparation in ~2 days
- MS runs in a few more
- Cloud‑based compute delivering interpretable data within a week
If results land while chemists are still excited about the next analogue, proteomics becomes a partner. If insights arrive months later, they’re little more than a historical note.
As one contributor put it:
“If your insight arrives while the team is still talking about that series, they’ll use it. If it lands three cycles later, you’ve missed the window.”
The group was careful not to equate speed with sloppiness. Automation, templated QC, and well‑defined thresholds can allow both pace and rigour.
Another panellist captured the leadership responsibility:
“Your solutions have to match the scale and speed of the teams you serve. Do that, and they’ll champion you.”
Key takeaway #2: Timing creates trust
- For ECRs: Learn the tempo of your collaborators and deliver insights at the moment they matter most.
- For leaders: Resource teams for timeliness, not false perfection - and empower them to act when momentum is high and the signal is strong.
Discussion 3: Speak Cross‑Functional – The “Duolingo” Approach
Proteomics sits at the intersection of biology, chemistry, and informatics, which makes translation non‑negotiable.
One panelist joked that they speak three languages at work: one to biologists, another to technicians, and a third to engineers. Each group frames problems differently, and proteomics only lands when it’s spoken in their terms.
Another suggested treating the move into industry like a language‑learning app:
“In your first year, find mentors in chemistry, PK/PD, and early biology. Learn how each discipline defines success and risk. That’s your Duolingo.”
Core facility leaders echoed this. Executives don’t need gigabytes of raw data; they need bite‑sized, directional signals that link to decisions.
A recurring theme:
- Join meetings outside your comfort zone
- Pair with mentors from other functions
- Always end your update with “Here’s what we’d do next”
An early‑career researcher on the panel summed it up:
“Do the work well first, then back yourself. Good results will often speak for you-but only if you’ve learned how to tell the story.”
Key takeaway #3: Translation is leadership
- For ECRs: Translation builds trust. Practice framing your findings in terms of other disciplines’ priorities, not just your own.
- For leaders: Model the behaviour. Ask for clarity, not excess complexity, and reward scientists who make the science accessible across functions.
Discussion 4: Automate the Plumbing
No one in the room questioned whether high throughput is possible. The debate was about where the real bottleneck now sits.
The consensus: mass spectrometers are rarely the limiting factor. Data handling is.
Several speakers were blunt: trying to run modern proteomics at scale purely on‑premises is asking for trouble. Cloud compute, automated data movement, and scripted workflows have become table stakes for teams that want to keep up.
“Don’t spend your best scientists’ time babysitting files. Automate the plumbing so they can think.”
Standardised QC emerged as a quiet hero. When QC is embedded and transparent - rather than rebuilt for every project - teams gain confidence in the results and stop over‑debugging the pipeline.
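No one at the event shared actual code, but as a purely illustrative sketch of what embedded, standardised QC can look like in practice - a shared gate that every run passes through, with pre‑agreed thresholds - something as small as the following goes a long way. All metric names and thresholds here are our own assumptions, not any team’s production pipeline:

```python
# Illustrative sketch only: a standardised QC gate applied identically to
# every MS run, instead of QC rules rebuilt per project. Metric names and
# thresholds are hypothetical assumptions for the example.

QC_THRESHOLDS = {
    "peptide_ids": 25_000,          # minimum peptides identified per run
    "mass_error_ppm": 5.0,          # maximum median mass error
    "missed_cleavage_rate": 0.15,   # maximum fraction of missed cleavages
}

def qc_gate(run_metrics: dict) -> tuple[bool, list[str]]:
    """Check one run's summary metrics; return (passed, failure reasons)."""
    failures = []
    if run_metrics["peptide_ids"] < QC_THRESHOLDS["peptide_ids"]:
        failures.append("too few peptide IDs")
    if run_metrics["mass_error_ppm"] > QC_THRESHOLDS["mass_error_ppm"]:
        failures.append("mass calibration drift")
    if run_metrics["missed_cleavage_rate"] > QC_THRESHOLDS["missed_cleavage_rate"]:
        failures.append("low digestion efficiency")
    return (not failures, failures)

# A run that drifts out of calibration gets flagged automatically,
# before anyone builds a slide around it.
passed, reasons = qc_gate({"peptide_ids": 31_400,
                           "mass_error_ppm": 7.2,
                           "missed_cleavage_rate": 0.08})
print(passed, reasons)  # False ['mass calibration drift']
```

The point isn’t these specific checks; it’s that the thresholds are agreed once, versioned, and applied to every run the same way - so the conversation moves from “is the data okay?” to “what do we do next?”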
One story highlighted how automating steps between structural methods (like HDX) and modelling shaved months off analysis timelines, instantly raising trust with leadership and collaborators.
Crucially, everyone agreed: automation isn’t about replacing expertise; it’s about liberating it.
Key takeaway #4: Automation is empowerment
- For ECRs: Learn to design reproducible, scripted workflows early. Automation is one of the most powerful forms of influence you can develop.
- For leaders: Invest in infrastructure that scales insight, not just throughput. Consistency and transparency are forms of credibility.
Discussion 5: Validate with Function, Not Endless Orthogonal Assays
The final theme drew a mix of laughter and relief. Nearly everyone had been stuck in validation limbo - rerunning assay after assay to confirm what the data had already said loudly enough.
One panellist shared a simple habit:
“Start by planning the functional readout you’ll trust when you see it. Don’t bolt validation on at the end.”
Another warned against chasing speed at the expense of robustness:
“As a field, we’re being given another chance to get this right. Don’t throw away robustness for the sake of a faster slide.”
Many felt that excessive validation often reflects a lack of trust - in the data, in the pipeline, or in how results will be received by leadership. The fix isn’t always more experiments; often it’s clearer QC, better documentation, and earlier agreement on what “good enough” looks like.
Key takeaway #5: Function over fear
- For ECRs: Design experiments with functional end‑points in mind from the start. Avoid open‑ended assay cascades that never quite reach a decision.
- For leaders: Build trust in the process and be explicit about thresholds for action. Empower teams to move when confidence is justified, not only when every possible box is ticked.
Conclusion: From Protein‑Blind to Protein‑Brave
The Boston fishbowl didn’t end with neat answers - it ended with momentum. Across disciplines and organisations, teams are already showing how these practices can transform workflows and influence decisions.
What became clear is that protein blindness isn’t a data problem; it’s a decision problem.
Solving it will take courage on both sides of the career spectrum:
- ECRs willing to speak up, connect disciplines, and lead from the bench
- Leaders willing to build cultures where ideas move as fast as the data
When proteomics becomes collaborative, decision‑ready, and functionally grounded, it stops being a post‑mortem exercise and becomes the engine of discovery.
For us, CONNECT: Boston was a reminder that progress in proteomics isn’t about generating more data. It’s about creating shared understanding - and building the cultures and systems where that understanding can change what we do next.
The shift from protein‑blind to protein‑brave is already being led by the next generation of scientists. Our job, collectively, is to give them the tools, trust, and time to finish what they’ve started.
---------
This blog post was produced by Assoc. Prof. Andrew Webb using a combination of original notes from the discussions and insights shared on the night. All statistics and quotes referenced are drawn from internal research and published sources. Final compilation was completed with assistance from ChatGPT. Any errors or omissions are unintentional, and the content is provided for informational purposes only. The views, thoughts, and opinions expressed in this text belong solely to the author, and not necessarily to the author’s employers, organisations, committees, or any other group or individual.