Qodo

Software Development

New York, NY · 10,114 followers

About us

Qodo is the enterprise platform for AI-driven code review, designed to help engineering teams keep pace with the velocity of coding. As AI accelerates development, Qodo ensures quality scales alongside it. Our multi-agent platform integrates deep codebase understanding, automated rule enforcement, and agentic review intelligence to deliver context-aware code reviews across the SDLC. Its agents handle PR review, in-IDE feedback, and background remediation, ensuring issues are caught early, fixes are validated, and standards are consistently enforced.

Website
https://www.qodo.ai/
Industry
Software Development
Company size
51-200 employees
Headquarters
New York, NY
Type
Privately Held
Specialties
AI, ML, code, code integrity, unit test, functional code quality, generative AI, generative code, copilot, and chatgpt

Updates

  • Qodo reposted this

    Top Developer Tools You Can Use in 2026

    1 - Code Editors & IDEs: These tools help developers write, edit, and debug code with greater efficiency. Examples: Visual Studio Code, IntelliJ IDEA, PyCharm, Cursor, Eclipse, etc.
    2 - Version Control Systems: Track code changes over time and enable collaboration between team members. Examples: Git, GitHub, GitLab, Bitbucket, AWS CodeCommit, etc.
    3 - Testing Tools: Help ensure that code behaves as expected by identifying bugs before they reach production. Examples: JUnit, Selenium, Playwright, Cypress, etc.
    4 - CI/CD Tools: Automate the process of building and deploying code to speed up delivery. Examples: Jenkins, GitHub Actions, CircleCI, Travis CI, and AWS CodePipeline.
    5 - Containerization & Orchestration: Package applications and their dependencies into containers so they run consistently across environments. Examples: Docker, Kubernetes, Podman, containerd, Rancher, etc.
    6 - Project Management Tools: Help development teams plan, organize, and track tasks. Examples: Jira, Asana, Trello, ClickUp, Notion, etc.
    7 - API Testing Tools: Test and validate APIs to ensure stable communication between services and with external consumers. Examples: Postman, Swagger, Hoppscotch, Insomnia, etc.
    8 - AI-Powered Developer Tools: Boost developer productivity with code suggestions, error detection, and automated code generation. Examples: ChatGPT, Claude Code, Cursor, Copilot, Qodo, etc.

    Over to you: Which other tools have you used?

    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://lnkd.in/g9wAgcke

    #systemdesign #coding #interviewtips

  • View organization page for Qodo

    Anthropic just published one of the most comprehensive guides we've seen on evaluating AI agents in production.

    The core insight: the capabilities that make agents useful (autonomy, intelligence, flexibility) are the same capabilities that make them difficult to evaluate! When AI generates thousands of lines of code, traditional review processes break down. You need infrastructure for verification, not just good intentions.

    Key frameworks covered:

    Three types of graders (a minimal sketch below):
    → Code-based: binary tests, static analysis, outcome verification
    → Model-based: rubric scoring, natural language assertions
    → Human: expert review for calibration

    When to build evals: teams that wait until they're "stuck in reactive loops" spend weeks on features that work in testing but fail unstated requirements. Teams that build evals early can verify changes against hundreds of scenarios before shipping.

    The compounding value: once evals exist, you get regression tests, baseline metrics, and faster model adoption for free. When more capable models release, teams with evals can upgrade in days. Teams without them face weeks of manual testing.

    The piece includes specific guidance for coding agents, conversational agents, and a shout-out to Qodo's new agentic eval framework.

    Read more: https://lnkd.in/dUuwuPR3
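
    For illustration, here's a minimal Python sketch of those three grader types, assuming a generic `llm` completion callable; the function names, rubric, and interfaces are our own assumptions, not APIs from Anthropic's guide or Qodo's framework.

    ```python
    # Minimal sketch of the three grader types (code-based, model-based, human).
    import subprocess

    def code_based_grader(repo_dir: str) -> bool:
        """Binary outcome check: the agent's change passes the test suite."""
        result = subprocess.run(["pytest", "-q"], cwd=repo_dir, capture_output=True)
        return result.returncode == 0

    RUBRIC = (
        "Score the diff from 1-5: does it solve the stated task "
        "without unrelated changes? Return only the number."
    )

    def model_based_grader(diff: str, llm) -> int:
        """Rubric scoring by a judge model; `llm` is any prompt -> text callable."""
        return int(llm(f"{RUBRIC}\n\nDiff:\n{diff}").strip())

    def human_grader(diff: str) -> int:
        """Expert spot-check, used to calibrate the model-based grader."""
        print(diff)
        return int(input("Expert score (1-5): "))
    ```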

  • View organization page for Qodo

    The vibe coding experiment failed in 2025. Not because AI can't generate code. It can, and it's fast.

    The failure was simpler: we built speed without the infrastructure to safely deploy it. When AI generates thousands of lines in minutes, but your team still reviews code one PR at a time, you've just moved the bottleneck. Speed without verification becomes a liability.

    Here's what we learned across enterprises from Salesforce to Walmart: the bottleneck isn't code generation. It's code verification.

    Teams that invested in verification layers (automated standards enforcement, context-aware review, quality gates that enforce outcomes) saw measurable improvements. Code review time dropped. Defects got caught pre-merge. Standards applied consistently across distributed teams.

    Teams that skipped verification? Technical debt accumulated. Review became the constraint. Security vulnerabilities reached production.

    The data backs this up: among teams using AI for code review, code quality improvements jump to 81%. Without proper review infrastructure, that drops to 59%. The difference is the verification layer (a toy quality-gate sketch is below).

    We wrote about what actually moved the needle in 2025, from automated standards enforcement to progressive rigor frameworks, and what it means for teams shipping AI-generated code at scale in 2026.

    Read the full post → https://lnkd.in/ebV5dSMK
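
    As a rough illustration of such a quality gate, here's a hedged Python sketch that only passes when every verification step succeeds; the specific tools (pytest, ruff, mypy) and the check list are assumptions for the example, not Qodo's pipeline.

    ```python
    # Hedged sketch of a pre-merge quality gate: the change is only eligible
    # to merge if every check exits 0. Tool choices here are examples.
    import subprocess
    import sys

    CHECKS = [
        ("unit tests", ["pytest", "-q"]),
        ("static analysis", ["ruff", "check", "."]),
        ("type checks", ["mypy", "src/"]),
    ]

    def run_gate() -> bool:
        for name, cmd in CHECKS:
            if subprocess.run(cmd, capture_output=True).returncode != 0:
                print(f"quality gate failed: {name}")
                return False
        print("quality gate passed")
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_gate() else 1)
    ```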

  • View organization page for Qodo

    We love this framing, Itamar Friedman! Code generation is quickly becoming table stakes, but code integrity is where real enterprise value (and trust) is won. That’s exactly what we’re building at Qodo: a system that learns and enforces org standards for quality, reliability, and security so teams can scale AI-assisted development without scaling risk. Excited for what's coming in 2026...

    2025 has been a defining year for the software development world, and for Qodo. 2026 is going to be huge. A few things that are top of mind.

    2025 was a turning point for how code is written. Many development organizations seriously committed to AI-assisted development, and that momentum is only accelerating into 2026. LLMs improved dramatically; for most practical purposes, they’re no longer the bottleneck. Code generation is rapidly becoming commoditized: a meaningful capability embedded in almost every leading development tool, from IDEs and design tools to issue trackers, agents, SRE tooling, and more.

    But here’s the gap. To get a real 2× productivity boost (let alone the 10–100× we’ve all promised our CTOs, CEOs, and ourselves), something critical is still missing: an AI system that automates and standardizes code integrity. Quality. Reliability. Security.

    I’m increasingly convinced that the next real step in heavy-duty, real-world software development won’t come from a single model or a clever prompt (or even a few of them). It will come from systems. Systems that continuously learn, surface, and enforce STANDARDS, BEST PRACTICES, and COMPLIANCE.

    Imagine a technical product manager pushing code to production without HIGHLY CREDIBLE, dedicated assurance that the software won’t break. That’s what 2026 is really about: solving this. And that’s the missing quality system we’re building at Qodo.

    In software especially, intelligence without quality doesn’t compound. It creates risk. That belief is why we started Qodo. If we want to automate software development, and enable many more “common citizens” inside the enterprise to contribute directly, we must pair that with AI systems that verify code integrity. Code integrity (and processes like code review that enable it) is orders of magnitude harder than code generation. But that’s exactly where we’re seeing real product and technical breakthroughs heading into 2026.

    And when we succeed, it won’t be because of one breakthrough. It’ll be because of the people. How we think. How we operate under uncertainty. How we make decisions when there’s no clear precedent. There’s no real playbook for what we’re building. We ship things that don’t fit cleanly into existing categories like “code generation.” That requires people who are comfortable moving forward with incomplete information, and still raising the bar.

    Because of that, I’m especially drawn to people who are deeply passionate about making a difference, e.g. those who’ve built something from scratch, or made the impossible possible, even if in something small. That mindset maps directly to the team we’re building at Qodo.

    If this resonates, I’d love to hear from you. Email me at itamar@qodo.ai with a couple of bullets on your beliefs, and one or two hard things you’ve actually built, shipped, or survived.

    2026 is going to change how we build real-world software. At Qodo, we’re proud to be building the missing quality layer.

  • View organization page for Qodo

    AI that writes your code shouldn’t review it, too. When the same model generates and “reviews,” you don’t get a second opinion… you get confirmation bias at scale. When humans review code, they instinctively switch from “get it working” mode to “how could this fail?” mode. Your AI workflows should, too. Separate code generation from critical code review, or you’re just automating your blind spots. At Qodo, we’re building that second brain for software development, where AI code review provides the independent step you need for quality and governance.

  • View organization page for Qodo

    New year, new Code Review habits 🎯

    If your engineering team is kicking off 2026 by leaning harder into AI, here’s one resolution worth making: stop letting the same AI that writes code also review it. That’s not a second opinion, it’s confirmation bias at scale.

    In our latest post, we break down what’s not working (and how to fix it):

    ✅ Generation and review optimize for different goals: speed vs. adversarial thinking and “what could fail?”
    ✅ When AI reviews its own output, quality can slip fast: GitClear’s 2024 analysis (211M LOC) found an 8× increase in duplicated code blocks, plus higher vulnerability risk after repeated AI “improvement” cycles.
    ✅ The fix is architectural: separate the concerns and use a dedicated review solution with fresh context and well-defined review and policy standards (see the sketch below).

    If you want AI to raise the bar (not rubber-stamp changes), the review layer needs to be independent, context-aware, and built to challenge, not agree.

    Read the full post here: https://lnkd.in/e932Ryfb
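
    As a rough illustration of that separation, here's a hedged Python sketch: generation and review are independent model calls with different objectives, and the reviewer gets fresh context. The prompts and the `llm` interface are illustrative assumptions, not our product's internals.

    ```python
    # Hedged sketch: separate code generation from code review. The reviewer is
    # an independent model call with its own adversarial objective and fresh
    # context. `llm` stands in for any chat-completion client (illustrative).
    GENERATOR_PROMPT = "You are a developer. Implement the task. Optimize for working code."
    REVIEWER_PROMPT = (
        "You are an adversarial reviewer with no stake in this code. "
        "Ask: how could this fail? Flag bugs, security risks, and standards violations."
    )

    def generate(task: str, llm) -> str:
        return llm(system=GENERATOR_PROMPT, user=task)

    def review(diff: str, standards: str, llm) -> str:
        # Fresh context: only the diff and team standards, never the generation
        # conversation, so the review can't inherit the generator's assumptions.
        return llm(system=REVIEWER_PROMPT, user=f"Standards:\n{standards}\n\nDiff:\n{diff}")
    ```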

  • View organization page for Qodo

    Just before we wrap up 2025, we’re excited to welcome our new Q4 Qodo team members!

    💫 Sales Superstars: James Wise, Maor Vadler, Guy Vago 🎗️, Wallon Walusayi
    🚀 Marketing Experts: Kerry Farrell, Nastasha Casale, Vladlena Mitskaniouk, Alivia Carter
    🥷 Support Ninjas: Thomas Helderop, Sean Haimovich
    🧠 Engineering Wizards: Amir Zaushnizer, Avrik Berman, Eilon Baer, Karen Y.

    From Israel to North America, we’re delighted to have you on board! Here’s to a 2026 filled with growth, new talent, and new opportunities as we continue building the Qodo team.

    • Qodo team
  • View organization page for Qodo

    AI tools are incredible at speed. But they're not great at understanding why your team structured code that way, what architectural decisions matter, or which shortcuts will cost you later. The result? Technical debt that compounds faster than you can pay it down.

    The pattern:
    → AI writes functional code fast
    → Reviews can't keep pace
    → Context-blind shortcuts slip through
    → Debt accumulates silently
    → Teams slow down despite "faster" tools

    But speed and quality aren't opposites. You just need to treat AI-generated code like code from a dev who's never seen your codebase: review it with full context, catch architectural drift early, validate it fits your standards. The fix is smarter review processes.

    Read more about staying ahead of AI-generated technical debt and why "ship fast, fix later" doesn't scale when AI writes 65% of your commits. → https://lnkd.in/eunVaK2c

  • View organization page for Qodo

    AI code is flooding PRs, and the real question for engineering leaders isn’t “which model is best?” It’s: how are we measuring code-review quality in a way that matches real production risk?

    Today's blog by Qodo engineering leader Bar Fingerman highlights what makes a good code review benchmark for AI tools. Here are three takeaways to consider heading into the new year:

    1. PR-level realism beats toy tasks. Benchmarks should reflect real pull requests, multi-file context, and the kinds of cross-cutting issues that actually break production, not just lint-level findings.
    2. Measure resolution, not only detection. Finding an issue is table stakes. Leaders should demand benchmarks that evaluate whether the tool suggests a correct, codebase-aligned fix that developers can actually ship.
    3. Score the developer experience and trust. Go beyond precision/recall and include signal-to-noise, conciseness, turnaround time, and reproducibility, because “accurate but unusable” won’t scale (toy scoring sketch below).

    If you’re evaluating AI code reviewers in your 2026 planning, this is a helpful framework for pressure-testing vendor claims. What’s the one metric you’d insist on before rolling out AI code review to your teams?

    https://lnkd.in/e-YvfXMH
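
    To make those axes concrete, here's a toy Python scoring sketch; the field names, formulas, and sample numbers are illustrative assumptions, not the benchmark from the blog.

    ```python
    # Toy scoring along the post's three axes: detection, signal-to-noise,
    # and resolution. Dataclass fields and formulas are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class ReviewRun:
        issues_found: int    # planted issues the tool correctly flagged
        issues_total: int    # issues planted in the benchmark PR
        comments_total: int  # everything the tool said, right or wrong
        fixes_shipped: int   # suggested fixes that applied cleanly and passed tests

    def score(run: ReviewRun) -> dict:
        detection = run.issues_found / run.issues_total            # recall on real issues
        precision = run.issues_found / max(run.comments_total, 1)  # signal-to-noise
        resolution = run.fixes_shipped / max(run.issues_found, 1)  # fixes, not just flags
        return {"detection": detection, "precision": precision, "resolution": resolution}

    print(score(ReviewRun(issues_found=8, issues_total=10, comments_total=12, fixes_shipped=6)))
    # detection 0.8, precision ~0.67, resolution 0.75
    ```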

  • View organization page for Qodo

    In the age of infinite code, the real bottleneck isn't writing code; it’s reviewing it for production.

    At Qodo, we’ve learned that achieving the highest quality in pull request reviews requires a deliberate, multi-model strategy. We don't just pick the latest model; we benchmark every LLM against real-world, multi-language PRs to find the ones that behave like senior human reviewers.

    Here is why we selected our three default "Reviewers" for the enterprise:

    -- GPT-5.2: Chosen for its unmatched depth in complex reasoning. It handles the heavy lifting: finding high-impact bugs and logic gaps that smaller models miss.
    -- Gemini 2.5 Pro: We picked this for its superior format reliability and stable fixes. While other models can be unstable in "preview" mode, Gemini 2.5 Pro provides the consistent, production-grade feedback teams need.
    -- Claude Haiku 4.5: Chosen for routine tasks where speed and precision matter most. It eliminates the "overthinking" that often degrades output quality in larger models, keeping reviews fast and cost-efficient.

    Why not just one model? Because a production-ready review is multifaceted. By deploying a layered architecture, Qodo ensures that computationally intensive analysis is handled by heavyweight models, while routine checks stay fast and low-noise (a toy routing sketch is below).

    Learn more about our benchmarking process from Ofir F. https://lnkd.in/e2Uftu_r
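
    Here's a toy sketch of what that layered routing could look like. Only the model names come from this post; the thresholds and the routing heuristic are illustrative assumptions.

    ```python
    # Toy layered routing: heavyweight reasoning for high-impact changes,
    # a fast model for routine diffs. Thresholds are invented for illustration.
    def pick_reviewer(files_changed: int, lines_changed: int, touches_core: bool) -> str:
        if touches_core or lines_changed > 500:
            return "gpt-5.2"         # deep reasoning for complex, high-impact changes
        if files_changed > 3:
            return "gemini-2.5-pro"  # stable, format-reliable multi-file review
        return "claude-haiku-4.5"    # fast, low-noise pass for routine diffs

    assert pick_reviewer(1, 20, touches_core=False) == "claude-haiku-4.5"
    assert pick_reviewer(12, 300, touches_core=False) == "gemini-2.5-pro"
    assert pick_reviewer(2, 40, touches_core=True) == "gpt-5.2"
    ```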

  • View organization page for Qodo

    Why AI Velocity Demands a New Standard for Production

    We have reached a breaking point in software engineering. AI can now generate 1,000 lines of code in seconds, but reviewing that volume at the pace required to maintain a healthy production environment is nearly impossible.

    Let’s be honest: reviewing massive, AI-generated PRs is frustrating. When developers are overwhelmed by volume, the temptation is to offer a quick "Looks Good To Me" (LGTM) and leave the rest to the build workflows. But "it compiles" is not the same as "it’s production-ready."

    Hear more from Itamar Friedman...

  • View organization page for Qodo

    88% of developers don't trust AI-generated code enough to deploy confidently. The problem isn't the AI. It's the context gap.

    33% of ALL AI tool improvement requests focus on one thing: better context awareness. Think about that. Not better code generation. Not faster models. They want context.

    Because without understanding your codebase, best practices, team norms, and project architecture, AI just generates plausible code. Better context = better quality. Every time.

    What context gaps are you seeing with AI-generated code?

    www.qodo.ai

  • View organization page for Qodo

    "If you want to push coding autonomy higher, you MUST invest in the verification layer." Dedy Kredo nailed it in last week's webinar: You can't have Agentic AI without an automated way to verify the agent didn't hallucinate a vulnerability, duplicate existing code, or break a dependency. Verification is the prerequisite for AI autonomy. 🛡️ Catch the full discussion with Dedy, Clinton Herget, Benjamin Stice, and Yonatan Boguslavsky on what actually moved the needle for engineering leaders in 2025! https://lnkd.in/em9KEbk3

  • View organization page for Qodo

    Join us TODAY at 1:00pm ET for a Masterclass in AI Software Security & Quality! Danny Allan, CTO at Snyk, and Itamar Friedman, CEO & Co-founder of Qodo, will join ESG Analyst Tyler Shields for a fireside chat to discuss best practices for building secure, high-quality software in the age of AI. More details and registration here: https://lnkd.in/eiDkwdmT

  • View organization page for Qodo

    Code faster. And review…slower? That's the new pattern.

    AI coding tools hit 82% adoption. Productivity is up 97.8%. But review time increased 91.1%. The gap between "AI wrote it" and "production-ready" is widening:

    -- 65% of respondents are making commits that are AI-influenced
    -- Only 28% of developers trust their AI code
    -- 65% say AI misses relevant context

    The bottleneck shifted: generation is solved, verification isn't. Manual reviews can't scale at AI velocity. Quality processes lag behind output. The infrastructure for confidence doesn't exist yet.

    Check out the data: State of AI Code Quality 2025: https://lnkd.in/dhWUyt_t

    What's breaking in your workflow?

  • View organization page for Qodo

    Heading into 2026, software maintainability has to be top of mind for enterprise engineering leaders as AI-accelerated code volume, rising complexity, and mounting technical debt collide.

    In this blog post, Nnenna Ndukwe ~ AI and Emerging Technology digs into why maintenance is the real bottleneck (soaking up huge amounts of developer time and IT budgets) and how context-aware AI can finally help teams manage sprawling, AI-influenced codebases without drowning in technical debt.

    When AI understands your architecture, services, patterns, and naming conventions, its reviews stop being generic suggestions and start becoming relevant, trustworthy guidance. That context is what makes AI code review sustainable long term; it helps teams evolve safely, preserve intent, and avoid turning “quick fixes” into tomorrow’s legacy mess.

    If you’re thinking about how to keep your systems fast to change and safe to evolve in the AI era, this is a must-read:
    👉 The Future of Software Maintainability: Context-Aware AI for Enterprise Codebases
    https://lnkd.in/eYXgaQqU

  • View organization page for Qodo

    It’s that time of year: engineering leaders are reflecting on what worked during a year of AI transformation and setting new priorities for the next.

    We're bringing together engineering leaders from Qodo, Salesforce, Snyk, and Port.io for a candid conversation about what actually moved the needle in 2025, and what's on deck for 2026. We’ll discuss:

    💡 What we're leaving behind: Tools and practices we're sunsetting without disrupting delivery
    💡 What we're doubling down on: Investments that measurably improved quality, velocity, and developer experience
    💡 What we're betting on next: 2026 experiments for reliability, security, and growth

    Join us TOMORROW at 12:00pm ET! Register here: https://lnkd.in/eXbRbtmv

  • Qodo reposted this

    View organization page for Snyk

    Join our upcoming webinar with Danny Allan, Snyk CTO; Itamar Friedman, Qodo CEO and Co-Founder; and Tyler Shields, Principal Analyst, Omdia, to learn how to secure the AI-driven software development lifecycle from end to end.

    In this webinar, we’ll cover:
    ⚡ How AI changes the threat landscape
    🔍 Key risk points across the SDLC
    🛡️ Strategies to identify and mitigate AI-specific vulnerabilities
    🚀 How teams can build fast and stay secure

    Don’t miss it — save your spot!
    🗓️ December 15, 10:00am PST
    https://lnkd.in/eiDkwdmT

    #AISecurity #AppSec #DevSecOps

  • Qodo reposted this

    Elana Krasner from Qodo shows how their code review IDE uses GPT-5.1-Codex-Max to trace module interactions, surface a real runtime risk, and apply a precise fix directly in the IDE.

    Qodo’s early tests showed GPT-5.1-Codex-Max beating internal benchmarks by a wide margin:
    ⚡ As fast and affordable as GPT-5.1
    🧠 Much smarter with more efficient token usage

    Thanks to the Qodo team for building with OpenAI!

  • Qodo reposted this

    View profile for Nnenna Ndukwe ~ AI and Emerging Technology

    AI Developer Relations Lead @ Qodo | Helping Engineers Build High-Integrity Software with AI | MIT x Harvard Speaker

    After countless conversations with engineers, tech leads, managers, and directors of engineering at AWS re:Invent, it's clear to me that we are truly witnessing a major opportunity for AI to help dev teams with these use cases:
    - managing legacy codebases
    - reducing technical debt
    - education/insights on obscure architecture that has lost tribal knowledge over time

    In some cases, innovation is about making it easier to maintain the old, hard things in industries that take much longer to catch up to what's "cool" and modern in tech stacks.

  • View organization page for Qodo

    Join us next Wednesday, December 10th, for an end-of-year roundtable with engineering leaders who have navigated AI transformation this year. No buzzwords, just honest conversation about what worked and what didn't.

    Our panelists include Benjamin Stice, engineering leader from Salesforce; Clinton Herget, Field CTO at Snyk; and Dedy Kredo, Co-founder at Qodo. We look forward to discussing:

    -- What we're leaving behind: Tools and practices we're sunsetting without disrupting delivery
    -- What we're doubling down on: Investments that measurably improved quality, velocity, and developer experience
    -- What we're betting on next: 2026 experiments for reliability, quality, and growth

    Register today: https://lnkd.in/eXbRbtmv

  • View organization page for Qodo

    Today’s a big day for us at Qodo! We’re excited to announce the Qodo AI Code Review Platform 🚀

    AI is helping teams ship faster than ever, but there’s a gap the industry doesn’t talk about enough: we’re generating more code, with less human attention per line. That’s a recipe for tech debt, brittle systems, and subtle bugs that only show up in production.

    With Qodo, we’re not trying to replace engineers or traditional reviews. We’re making them safer, faster, and more focused on what matters:

    🧠 Understanding your codebase – Qodo indexes multi-repo codebases into a shared knowledge layer so agents can understand system behavior, trace dependencies, and uncover risks that span services.
    🛠️ A System of Quality Agents – Qodo is built on a multi-agent system in which each agent carries out a specific quality-focused task, such as detecting issues, surfacing breaking changes, enforcing rules, or identifying code duplication (toy sketch below).
    🔍 Local Code Review – Qodo enables customers to run high-signal reviews before code is pushed.

    As AI-generated code becomes the norm, we believe the conversation has to shift from “How fast can we ship?” to “How confident are we in what we ship?”

    If you’re thinking about how to keep quality high while your codebase (and AI usage) explodes, we’d love to connect, show you the Qodo AI Code Review Platform, and learn how your team is approaching this challenge. 💬

    Read more here: https://lnkd.in/e4GXBCMK
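
    For a rough picture of that multi-agent pattern (not Qodo's actual implementation), here's a hedged Python sketch in which each agent owns one quality concern and a dispatcher merges their findings; the heuristics are deliberately toy-level.

    ```python
    # Hedged multi-agent sketch: each agent checks one quality concern over the
    # same diff; review() merges all findings. Heuristics are toy placeholders.
    from typing import Callable

    Finding = dict  # e.g. {"agent": ..., "severity": ..., "message": ...}

    def issue_agent(diff: str) -> list[Finding]:
        if "except:" in diff:  # bare except hides real failures
            return [{"agent": "issues", "severity": "high", "message": "bare except swallows errors"}]
        return []

    def duplication_agent(diff: str) -> list[Finding]:
        lines = [ln for ln in diff.splitlines() if ln.strip()]
        if len(lines) != len(set(lines)):  # repeated lines suggest copy-paste
            return [{"agent": "duplication", "severity": "medium", "message": "duplicated lines"}]
        return []

    def rules_agent(diff: str) -> list[Finding]:
        if "TODO" in diff:  # toy org rule: no TODOs may land on main
            return [{"agent": "rules", "severity": "low", "message": "unresolved TODO"}]
        return []

    AGENTS: list[Callable[[str], list[Finding]]] = [issue_agent, duplication_agent, rules_agent]

    def review(diff: str) -> list[Finding]:
        return [finding for agent in AGENTS for finding in agent(diff)]

    print(review("try:\n    x()\nexcept:\n    pass  # TODO fix"))
    ```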

  • View organization page for Qodo

    Measuring engineering productivity in large, distributed teams is hard, and the usual “output” metrics don’t cut it. In our latest blog, we break down a practical, evidence-based approach that balances speed, quality, and developer experience:

    -- Outcomes over output: Measure customer impact and reliability, not lines of code.
    -- Quality + speed together: Pair DORA-style flow metrics with code review SLOs (e.g., p50/p75 time to first response; a toy sketch follows below).
    -- Instrument the PR lifecycle: Track where work stalls (draft → review → re-review → merge) and fix the bottlenecks.
    -- Listen to devs: Run lightweight DX surveys and correlate sentiment with delivery data.
    -- AI with guardrails: Standardize code quality checks so AI-accelerated changes stay safe and compliant.

    👉 Read the post for the full playbook: https://lnkd.in/eiqV-WZr, and if you're attending AWS re:Invent, come visit us for a live demo of how AI-powered code review can improve the metrics above!
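
    As a toy example of that review SLO, here's a sketch computing p50/p75 time to first review response from PR timestamps; the timestamp format and sample data are invented for illustration.

    ```python
    # Toy p50/p75 "time to first review response" from PR event timestamps.
    from datetime import datetime
    from statistics import quantiles

    FMT = "%Y-%m-%dT%H:%M:%S"

    def hours_to_first_response(opened: str, first_review: str) -> float:
        delta = datetime.strptime(first_review, FMT) - datetime.strptime(opened, FMT)
        return delta.total_seconds() / 3600

    samples = [
        hours_to_first_response("2025-11-03T09:00:00", "2025-11-03T11:30:00"),
        hours_to_first_response("2025-11-03T14:00:00", "2025-11-04T09:00:00"),
        hours_to_first_response("2025-11-04T10:00:00", "2025-11-04T10:45:00"),
        hours_to_first_response("2025-11-05T08:00:00", "2025-11-05T16:00:00"),
    ]
    p25, p50, p75 = quantiles(samples, n=4)  # quartile cut points; p50 is the median
    print(f"time to first response: p50={p50:.1f}h p75={p75:.1f}h")
    ```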

  • View organization page for Qodo

    As we head into the holiday weekend 🦃, a huge thank you from all of us at Qodo to our customers, partners, and community. Your trust, feedback, and collaboration push us to raise the bar every day, focused on improving the quality of how great software is built. We’re grateful for the chance to build with you, learn from you, and celebrate the wins together. 🙏✨

  • View organization page for Qodo

    Before you head out for the holiday weekend…lock in your AWS re:Invent plans 🎯

    Meet Qodo next week to see how AI Code Review can raise quality and speed, AND snag a VIP bracelet to our happy hour at Morimoto (MGM Grand) with live music from DJs Alick Friedman ☁️ & Dana Fine! 🎧

    3 quick steps:
    -- Book a booth meeting: https://lnkd.in/evJNffAG
    -- Pick a time that works
    -- Pick up your VIP bracelet at Qodo booth #1571!

    See you in Vegas!

  • View organization page for Qodo

    Codebase understanding beats guesswork. In the AI era, quality depends on how well your AI software development workflows understand the whole codebase: cross-repo history, dependencies, standards, and tests, not just the file in front of them. That’s how you cut noise, catch deeper risks, and deploy with confidence.

    We’re proud that Qodo was ranked highest for Codebase Understanding in the 2025 Gartner® Critical Capabilities for AI Coding Assistants, recognizing the importance of deep context understanding and high precision/recall for enterprise software development.

    If you want to learn more about how to scale AI software development safely, read about the critical capabilities, such as codebase context, that are required.
    👉 Get the report: https://lnkd.in/dxR865JG

    • No alternative text description for this image
  • View organization page for Qodo

    The Quality Crisis has Begun...

    Last week at AI Engineer Code Summit, Itamar Friedman unpacked the State of AI Code Quality and why leaders need systemic quality gates, not just faster suggestions. Itamar highlighted:

    * 82–92% developer adoption of AI coding tools
    * ~3× productivity boost in writing code
    * But 67% of teams report it’s getting harder to maintain quality

    What we’re seeing on the ground:

    * AI code-gen adoption is exploding.
    * Speed ≠ quality: without context and policy, noise sneaks into production.
    * Most orgs still lack AI-era QA frameworks (standards, verification, evidence).

    Takeaway: context beats guesswork. Teams need review systems that are local (IDE), codebase-wide (cross-repo context), and policy-aware, so that every change carries evidence, not vibes.

    How is the quality crisis impacting your dev team?

    • Speed does not equal quality in AI software development. Your engineering team needs to do both.
  • Qodo reposted this

    View profile for Dana Fine

    CNCF Ambassador 2023🦠 |GitHub Star🌟 | Empowering Women in the Dev Ecosystem.🚺

    #MSIgnite & QCon Software Development Conferences - what an incredible week across two major industry events for Qodo in San Francisco. 🚀

    We had a few standout highlights at #MSIgnite thanks to Gilad Gershon, Natasha Pollayil, and Ashutosh Joshi: Itamar Friedman’s session was excellent, moderated by Jen Hoskins, and we were honored at the NVIDIA booth, a true testament to NVIDIA's strong support. 🤩 And David Parry delivered a full-day AI workshop at QCon Software Development Conferences, leaving a strong impression on attendees.

    Special thanks to Suki Randhawa, Ami Dalton, CMP, CPT, PPCES from Microsoft, and Snir Levy for the incredible support and for getting us presenting at a special VIP pre-event reception.

    Thank you to our incredible Qodo team: Jonathan Klick, Oran Shiner, June McClure, Ryan Relinger, David Parry, Aaron Carlo Bloomfield, Doug Mosemann, and Alex Frisbie 🫡

    If you want to meet us next — we’ll see you at AWS re:Invent! ⚡💙

    #Qodo #MicrosoftIgnite #QCon #AI #Conferences
