Possible Mentor(s): Parker Lougheed (parlough), Kevin Moore (kevmoo), John Ryan (johnpryan)
Difficulty: Hard
Project size: Large (350 hours)
Skills: Dart, TypeScript/JavaScript, LSP (Language Server Protocol), Monaco Editor API
Description: Currently, DartPad’s autocomplete is limited to "Member Completion" (triggered by the . operator). While typing text. after a known variable yields robust suggestions, typing Te at the top level does not suggest Text, TextAlign, or other SDK classes. This creates a high barrier for beginners, who must memorize class names before they can see property suggestions.
This project aims to bridge the gap between the DartPad web environment and the full IDE experience by:
Transitioning to LSP: Evaluating and implementing a migration from the legacy Analysis Server protocol to the Language Server Protocol (LSP) for backend communication.
Global Symbol Indexing: Developing a mechanism to efficiently query the Dart/Flutter SDK for top-level symbols without overwhelming the browser or the server.
Monaco Integration: Optimizing the Monaco Editor configuration to provide "IntelliSense" style suggestions on every alphanumeric keystroke.
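One cheap way to make the "Global Symbol Indexing" goal concrete: keep the SDK's top-level symbol names in a sorted array and answer each keystroke's prefix query with a binary search, so lookups cost O(log n + k) rather than a full scan. The sketch below is illustrative only; the symbol list is a tiny stand-in for a real SDK index, and the function names are my own, not an existing API.

```typescript
// Tiny illustrative subset of SDK top-level symbols (a real index would
// be generated from the Dart/Flutter SDK).
const sdkSymbols: string[] = [
  "Column", "Container", "Row", "Scaffold", "Text", "TextAlign", "TextStyle",
].sort();

// Binary search for the first index whose entry is >= prefix.
function lowerBound(arr: string[], prefix: string): number {
  let lo = 0;
  let hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] < prefix) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// Return up to `limit` symbols starting with `prefix`.
function prefixQuery(prefix: string, limit = 50): string[] {
  const out: string[] = [];
  for (
    let i = lowerBound(sdkSymbols, prefix);
    i < sdkSymbols.length && out.length < limit && sdkSymbols[i].startsWith(prefix);
    i++
  ) {
    out.push(sdkSymbols[i]);
  }
  return out;
}
```

Because the results come back in sorted order and are capped by `limit`, the same structure works on either the client (for a cached subset) or the server (for the full SDK) without overwhelming the browser.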
Good Sample Project: Create a simplified web-based editor using the Monaco Editor and a mock LSP client.
The Task: Implement a client-side "Warm Cache" that stores the top 100 most used Flutter Widgets.
The Goal: When a user types Sc, the editor should instantly show Scaffold and ScrollController from the cache, while simultaneously firing an async request to a mock server to fetch less common results.
Bonus: Reference my previous contributions to dart-pad (e.g., Issue #3562) to show familiarity with the current codebase.
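The cache-then-merge flow in the sample project above could be sketched roughly as follows. Everything here is illustrative: the widget list is a stand-in for the "top 100" cache, the 50 ms delay simulates the mock server, and the function names are not part of any existing DartPad API.

```typescript
// Hypothetical client-side "warm cache" of commonly used Flutter widgets.
const warmCache: string[] = [
  "Scaffold", "ScrollController", "Scrollbar", "Text", "TextAlign",
  // ...in the real project, the top 100 most-used widgets.
];

// Instant, synchronous lookup against the local cache.
function localMatches(prefix: string): string[] {
  return warmCache.filter((name) => name.startsWith(prefix));
}

// Simulated slow round trip to a mock server for less common symbols.
async function fetchRemoteMatches(prefix: string): Promise<string[]> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  const remote = ["ScrollPhysics", "ScrollNotification"];
  return remote.filter((name) => name.startsWith(prefix));
}

// Show cached results immediately, then merge de-duplicated remote
// results when they arrive; cached entries keep their positions.
async function complete(prefix: string): Promise<string[]> {
  const merged = localMatches(prefix); // shown to the user instantly
  for (const name of await fetchRemoteMatches(prefix)) {
    if (!merged.includes(name)) merged.push(name);
  }
  return merged;
}
```

Typing Sc would immediately surface Scaffold and ScrollController from the cache, with ScrollPhysics appended once the mock server responds.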
Expected outcome: A version of DartPad where users get instant, contextual suggestions for any Dart or Flutter class/constructor at the top level, significantly improving the "time to first run" for new developers.
Hi everyone,
Just a quick update on this proposal concept. Over the weekend, I spent some time deep-diving into the actual dart-pad and dart-services repositories to map out the implementation.
I realized my initial pitch assumed a Monaco/LSP architecture, but I see now that DartPad is deeply integrated with CodeMirror (specifically in dartpad_ui/lib/app/editor/web/editor_service.dart) and uses a custom /complete wrapper for the Analysis Server (dart_services/lib/src/analysis.dart).
Given the 350-hour GSoC timeframe, migrating the entire editor to Monaco and the backend to LSP is likely out of scope. Therefore, I am pivoting my official proposal to work within the existing architecture while still solving the core problem (the "time to first run" friction for beginners).
My revised approach focuses on:
Client-Side "Warm Cache": Injecting a local cache of top-level Flutter widgets directly into the frontend to provide zero-latency hints.
CodeMirror Auto-Triggering: Reconfiguring the CodeMirror onKeyPress events in editor_service.dart to intelligently auto-trigger showCompletions on alphanumeric input at the top level, merged asynchronously with the /complete backend results.
Analysis Server Optimization: Fine-tuning the getSuggestions2 calls so the backend can absorb the increased frequency of autocomplete requests without degrading server responsiveness.
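To sketch the auto-trigger idea from the second point: fire completions only on identifier characters, and debounce so rapid typing issues a single request instead of one per keystroke. This is a minimal illustration, not DartPad's actual editor_service.dart code; the showCompletions callback and the 150 ms window are assumptions.

```typescript
// Only letters, digits, and underscores should trigger completion.
function isIdentifierChar(key: string): boolean {
  return /^[A-Za-z0-9_]$/.test(key);
}

// Returns a key handler that debounces calls to showCompletions, so a
// burst of keystrokes produces one completion request after typing pauses.
function makeAutoTrigger(
  showCompletions: () => void,
  debounceMs = 150,
): (key: string) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (key: string) => {
    if (!isIdentifierChar(key)) return; // ignore punctuation, arrows, etc.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(showCompletions, debounceMs);
  };
}
```

Wired into CodeMirror's key events, this keeps the request rate bounded by typing pauses rather than raw keystroke count, which is what makes the higher-frequency getSuggestions2 usage tractable server-side.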
I'm currently putting together a small CodeMirror-based proof-of-concept for the warm cache to include in my final proposal document. I'll share the draft proposal soon!
Best, Naman
Hi everyone,
Following up on my last update, I have completed the full draft of my GSoC proposal for the Hybrid Contextual Autocomplete architecture.
Draft Proposal Link: https://docs.google.com/document/d/1hyzcu1ik1-LFdO_jCHI3qnZS9VABQdv613R2ijoXwZg/edit?usp=sharing
As promised, I also built the proof-of-concept for the Warm Cache + Async Merge algorithm. To make it easy to test, I wrote it as a self-contained Flutter app that runs directly inside DartPad itself. It demonstrates the debounced trigger, the zero-latency local cache, and the deterministic merge of delayed network results without UI jitter.
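For context, the core of that jitter-free merge can be sketched as an append-only combine: items already on screen never move, and late-arriving network results are only ever added to the end. The function name and inputs below are illustrative, not the PoC's actual code.

```typescript
// Merge late-arriving completion results into the displayed list without
// reordering anything the user can already see.
function stableMerge(displayed: string[], incoming: string[]): string[] {
  const seen = new Set(displayed);
  const result = [...displayed];
  for (const item of incoming) {
    if (!seen.has(item)) {
      seen.add(item);
      result.push(item); // append only; visible items keep their positions
    }
  }
  return result;
}
```

Because the merge is a pure function of (displayed, incoming), repeated network responses produce the same final list regardless of arrival timing, which is what makes the result deterministic.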
You can find the runnable Gist link under the "Good Sample Project" section of the proposal.
I would be incredibly grateful for any feedback from the mentors on the technical scope, timeline, or edge cases I might have missed. The document is open for comments.
Thanks again for your time and guidance!
Best, Naman