Recently, I attended a founders social event in San Francisco—around 80 people, mostly founders and investors. It was a Partiful event, and the only information I had for each person was a name (sometimes just a social handle from X or Instagram). That sparked a thought:
What if I could research everyone there, understand who they are, and find thoughtful ways to connect more meaningfully?
That kicked off an experiment:
Could AI help me prep for events and build deeper, smarter connections?
I gave myself just four hours to design, build, and test the whole workflow.
Step 1: The AI Tools I Tried (and Why They Fell Short)
I started with the obvious approach: export all the names into a CSV and run them through tools like Manus, Suna.so, and ChatGPT’s deep research mode. The goal was to find each person’s LinkedIn profile, enrich the data, and help me better prepare for the event.
Unfortunately, the results were worse than expected on even the first task:
- Out of 86 people, both Manus and Suna stopped processing around the 40-person mark, yet still reported the job as “completed”. Nearly half of the matches were incorrect or hallucinated.
- ChatGPT’s deep research never completed at all. It ran for 20 to 30 minutes, then simply stalled: no results, no error, just wasted compute and time. The whole process felt unreliable and inefficient, which pushed me to look for a different approach.
Step 2: If I Had to Do It Myself
Since the tools didn’t work, I started thinking: What would I do if I had to do this manually? I tried to mimic my own decision-making process—but automate it.
That meant prioritizing LinkedIn search results using:
- Profiles with mutual connections (a trust signal)
- Founders and investors (since it was a founder-focused event)
- People located in San Francisco (shared local context)
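In rough code terms, the ranking in my head looked something like this. This is a sketch with invented weights and field names, not the exact logic I shipped:

def score_candidate(profile: dict) -> int:
    # Rank a LinkedIn search hit the way I would by eye (weights are made up)
    title = profile.get("title", "").lower()
    score = 0
    if profile.get("mutual_connections"):
        score += 3  # trust signal
    if "founder" in title or "investor" in title:
        score += 2  # founder-focused event
    if "san francisco" in profile.get("location", "").lower():
        score += 1  # shared local context
    return score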
Once I mapped out that logic, I used a browser automation agent to do the research for me: opening profiles, extracting the fields I cared about, and simulating human browsing. The result? I matched 79 out of 86 profiles—far better than any of the AI tools. The few I couldn’t find were due to incomplete or vague names on Partiful.
Here’s a simplified snippet using the browser-use agent:
import asyncio
import csv

from dotenv import load_dotenv
load_dotenv()

from browser_use import Agent, BrowserSession, Controller
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    title: str
    linkedin_url: str
    image_url: str
    common_connections: list[str]

controller = Controller(output_model=Person)

# Guest names exported from Partiful (placeholder values here)
people = ["Jane Doe", "John Smith"]

def write_result_to_csv(filename: str, query: str, result: Person, is_first_row: bool):
    # Append one row per person; write the header once, on the first row
    with open(filename, "a", newline="") as f:
        writer = csv.writer(f)
        if is_first_row:
            writer.writerow(["query", "name", "title", "linkedin_url",
                             "image_url", "common_connections"])
        writer.writerow([query, result.name, result.title, result.linkedin_url,
                         result.image_url, "; ".join(result.common_connections)])

async def main():
    browser_session = BrowserSession(
        # Use a specific data directory on disk (optional, set to None for incognito)
        user_data_dir='~/.config/browseruse/profiles/default'
    )
    csv_filename = "results.csv"
    processed_count = 0
    for i, person in enumerate(people):
        try:
            print(f"Processing {i+1}/{len(people)}: {person}")
            agent = Agent(
                task=f"""
                Help me find {person}'s LinkedIn profile.
                Rules:
                - search on linkedin.com since I'm logged in
                - use the search url directly:
                  - example: https://www.linkedin.com/search/results/people/?keywords=<name>&origin=SWITCH_SEARCH_VERTICAL&sid=1bG
                - prioritize results I have mutual connections with
                - prioritize founders and investors
                - prioritize results located in San Francisco, CA
                Output the name and LinkedIn profile url of the person.
                """,
                llm=ChatOpenAI(model="gpt-4o-mini"),
                browser_session=browser_session,
                controller=controller,
            )
            history = await agent.run()
            result = history.final_result()
            person_result = Person.model_validate_json(result)
            # Write each result incrementally so a crash doesn't lose progress
            write_result_to_csv(csv_filename, person, person_result,
                                is_first_row=(processed_count == 0))
            processed_count += 1
        except Exception as e:
            print(f"✗ Error processing {person}: {e}")
            # Write an error row to the CSV to keep track of failures
            error_result = Person(
                name=f"ERROR: {person}",
                title="",
                linkedin_url="",
                image_url="",
                common_connections=[],
            )
            write_result_to_csv(csv_filename, person, error_result,
                                is_first_row=(processed_count == 0))
            processed_count += 1
            continue

    print(f"\nCompleted! Processed {processed_count}/{len(people)} people. Results saved to {csv_filename}")

asyncio.run(main())
Yes, LinkedIn scraping is a gray area. But I kept everything local, credential-free, and rate-limited—essentially mimicking a human browsing pattern for personal research.
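The rate limiting itself was nothing fancy: just a jittered pause between lookups so the traffic pattern looked human. A minimal sketch (the intervals here are invented):

import asyncio
import random

async def human_pause(min_s: float = 8.0, max_s: float = 25.0) -> None:
    # Sleep a random, human-ish interval between profile lookups
    await asyncio.sleep(random.uniform(min_s, max_s))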
Step 3: Enrich, Summarize, and Prep
Once I had the LinkedIn URLs, I aggregated data using a mix of third-party APIs and information directly from each profile—job titles, histories, company context, recent news, and mutuals. Then I used AI to summarize the highlights.
Instead of showing raw data, I used LLMs to pull out the most relevant insights. That way, I could quickly understand someone’s background without sifting through a wall of text.
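For illustration, the summarization step was roughly this shape. A minimal sketch assuming the enriched profile is already a dict; the prompt wording and model choice are placeholders, not the exact code I ran:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_profile(profile: dict) -> str:
    # Condense an enriched profile into a handful of event-ready highlights
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You help me prep for a founder event. Given a person's "
                "enriched LinkedIn data, return 3-5 bullet highlights: "
                "role, company context, recent news, mutual connections."
            )},
            {"role": "user", "content": json.dumps(profile)},
        ],
    )
    return response.choices[0].message.content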
For this part, I used AI to help me build a simple yet effective UI that made the data much easier to consume. Instead of scrolling through overwhelming lists or spreadsheets, I had clean, organized visual cards that made reviewing and exploring people seamless. This reinforced how much design still matters—even in AI workflows.
For deeper dives, I used Zoza to generate detailed company and person reports. These gave me even more context and made it easier to prep strategically.
Step 4: Deepening Context with GPT Integration
Once I had rich summaries and context in the UI, I found there were moments where I still needed more depth.
Instead of building full follow-up logic into the product (I only had ~4 hours to ship this), I added a simple “copy” button that pulled together all of a person’s info. I could then paste it directly into ChatGPT, where I had a custom GPT designed to help me brainstorm follow-up questions and conversation angles.
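Under the hood, the “copy” button just flattened everything I knew about a person into one prompt-ready blob. Something along these lines, reusing the Person model from earlier (the exact fields and framing are illustrative):

def build_clipboard_blob(person: Person, summary: str) -> str:
    # Flatten a person's card into text I can paste into my custom GPT
    lines = [
        f"Name: {person.name}",
        f"Title: {person.title}",
        f"LinkedIn: {person.linkedin_url}",
        f"Mutual connections: {', '.join(person.common_connections) or 'none found'}",
        "",
        "Summary:",
        summary,
        "",
        "Help me brainstorm follow-up questions and conversation angles.",
    ]
    return "\n".join(lines)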
This lightweight integration turned out to be super effective. It helped me clarify talking points, explore interesting threads, and feel mentally organized before walking up to someone.
Outcome: What This Helped Me With at the Event
Having this system in place really paid off. Here’s how it helped me on the ground:
- Conversation starters at a glance – Being able to quickly glance at a person’s card gave me enough background to spark meaningful conversation. For example, if someone’s company had just been acquired or launched a new product, that became a great entry point.
- Common connections – Seeing who we both knew added context and often made intros smoother. I could mention mutual friends or ask how they knew someone, which instantly made things feel more familiar.
- Shared interests – While I didn’t have enough time to tune this signal properly, I can already tell how powerful it would be for deeper bonding. There’s a lot of room to explore here in future versions.
- Intuitive, simple UI – Having a clean UI to summarize and then optionally dive deeper made all the difference. It helped reduce information overload and made me feel way more prepared between conversations.
Overall, I’m not someone who naturally thrives at large social events, but having this tool boosted my confidence and made the whole experience more enjoyable.
What I Learned
Here are the biggest lessons from the experience:
- Local-first gave me control and safety – Because I had to log into LinkedIn, running the automation locally felt much safer. Cloud-based tools are more likely to get flagged due to shared IPs or bot-like patterns. But even more important was control: I could see exactly what was happening and step in if needed.
- Out-of-the-box AI agents often lack transparency and reliability – Tools like Manus and Suna generated code and executed tasks, but offered little visibility or control. If something failed or hallucinated, I had no easy way to revise or rerun the logic. I wanted to be part of the loop, not just press a button and hope. The most frustrating part was the wait-and-fail cycle: ChatGPT deep research ran for 20 to 30 minutes and then simply stopped. No output. No error. Just silence. Waiting for something that never delivers felt like a complete waste of time and compute.
- Good UI turns raw data into actionable insight – No matter how advanced the backend is, if the interface isn’t intuitive, the whole system suffers. My custom-built UI helped me process and act on the data with speed and clarity. It reminded me that usability is not a luxury; it’s essential.
- Pairing structured context with conversational AI is key – Having detailed, structured information is great, but sometimes you need the flexibility to explore it dynamically. The simple act of piping a person’s profile into ChatGPT gave me an easy way to ask follow-up questions, brainstorm talking points, or prep angles before a meeting. It reinforced that combining structured data with natural language interfaces unlocks a powerful prep workflow.
What I Wish Existed
One big missing piece: a persistent, AI-native knowledge base of the people I meet—powered by agents that actually remember.
Right now, everything I built was scoped to a single event. But what I really need is a system that continues to learn and evolve. Imagine a multi-agent system where each agent helps me manage my relationship graph:
- One agent keeps track of how I know someone and when we last interacted.
- Another surfaces context or notes before I see them again.
- A third suggests action items—like following up, sending an article, or reconnecting.
Each of these agents would operate with memory, context, and collaboration—helping me nurture relationships over time without me having to micromanage it all.
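To make that concrete, here is one shape the underlying record could take. This is purely speculative; the schema and agent roles below are my invention, not an existing product:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Relationship:
    # One node in a persistent relationship graph (hypothetical schema)
    name: str
    how_we_met: str
    last_interaction: date
    notes: list[str] = field(default_factory=list)
    suggested_actions: list[str] = field(default_factory=list)

# Each agent would own a slice of this record:
# - a history agent updates how_we_met and last_interaction,
# - a context agent surfaces notes before the next meeting,
# - an action agent fills suggested_actions (follow up, send an article, reconnect).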
That’s the next step in what I imagine as a truly AI-first application: not just reactive, but proactive, personalized, and context-aware.