<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Cline - Tag - Naifan Li's Blog</title><link>https://blog.omagiclee.com/tags/cline/</link><description>Cline - Tag - Naifan Li's Blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Wed, 01 Jan 2025 19:59:15 +0800</lastBuildDate><atom:link href="https://blog.omagiclee.com/tags/cline/" rel="self" type="application/rss+xml"/><item><title>Local AI Programming Assistant: VSCode + Continue/Cline + vLLM + Kimi-K2.5</title><link>https://blog.omagiclee.com/posts/toolkits/llm-inference-engines/vscode-with-continue-extension/</link><pubDate>Wed, 01 Jan 2025 19:59:15 +0800</pubDate><author>Naifan Li</author><guid>https://blog.omagiclee.com/posts/toolkits/llm-inference-engines/vscode-with-continue-extension/</guid><description><![CDATA[<h2 id="introduction">Introduction</h2>
<p>This document provides a comprehensive guide to integrating <a href="https://github.com/vllm-project/vllm" target="_blank" rel="noopener noreferrer">vLLM</a> with <a href="https://github.com/continuedev/continue" target="_blank" rel="noopener noreferrer">Continue</a> and <a href="https://github.com/cline/cline" target="_blank" rel="noopener noreferrer">Cline</a> to build a high-performance, low-latency local LLM programming-assistant environment.</p>
<ul>
<li>vLLM provides the inference backend, using techniques such as PagedAttention for efficient KV-cache memory management.</li>
<li>Continue and Cline provide AI-assisted coding capabilities directly within VSCode, connecting to vLLM through its OpenAI-compatible API.</li>
</ul>
<h2 id="architecture">Architecture</h2>
<div style="display: flex; justify-content: center;">
<p><em>Architecture diagram: Continue/Cline inside VSCode communicate with the vLLM inference service over the OpenAI-compatible API.</em></p>
</div>
<h3 id="component-description">Component Description</h3>
<table>
  <thead>
      <tr>
          <th style="text-align: center">Component</th>
          <th style="text-align: center">Role</th>
          <th style="text-align: center">Key Features</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td style="text-align: center"><strong>vLLM</strong></td>
          <td style="text-align: center">LLM Inference Service</td>
          <td style="text-align: center">PagedAttention, Streaming, Prefix Caching</td>
      </tr>
      <tr>
          <td style="text-align: center"><strong>Continue</strong></td>
          <td style="text-align: center">VSCode AI Copilot</td>
          <td style="text-align: center">Code Completion, Summarization, Diagnostics, Refactoring</td>
      </tr>
      <tr>
          <td style="text-align: center"><strong>Cline</strong></td>
          <td style="text-align: center">AI Programming Assistant</td>
          <td style="text-align: center">Task Execution, Conversation, Multi-file Operations</td>
      </tr>
      <tr>
          <td style="text-align: center"><strong>OpenAI API</strong></td>
          <td style="text-align: center">Communication Protocol</td>
          <td style="text-align: center">Standardized Interface, Good Compatibility</td>
      </tr>
  </tbody>
</table>
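<p>Once the stack is running, the wiring can be verified with a single request to the chat-completions endpoint. The sketch below assumes vLLM is listening on <code>localhost:8000</code> and that the served model is registered as <code>kimi-k2.5</code>; adjust both to match your deployment.</p>
<pre><code># Smoke test: hit vLLM's OpenAI-compatible chat-completions endpoint.
# Assumes vLLM listens on localhost:8000 and serves a model named "kimi-k2.5".
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2.5",
    "messages": [{"role": "user", "content": "Write a hello-world in Python."}]
  }'
</code></pre>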
<h2 id="installation--configuration">Installation &amp; Configuration</h2>
<h3 id="vllm-installation--configurationpoststoolkitsllm-inference-enginesvllminstallation"><a href="/posts/toolkits/llm-inference-engines/vllm/#installation" rel="">vLLM Installation &amp; Configuration</a></h3>
<h3 id="continue--cline-installation--configuration">Continue &amp; Cline Installation &amp; Configuration</h3>
<h4 id="install-continue--cline-extensions">Install Continue &amp; Cline Extensions</h4>
<ol>
<li>Open the VSCode Extensions view (<code>Ctrl+Shift+X</code>)</li>
<li>Search for &ldquo;Continue&rdquo; or &ldquo;Cline&rdquo;</li>
<li>Click <strong>Install</strong> (or install from the command line, as shown below)</li>
</ol>
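<p>Alternatively, both extensions can be installed via the <code>code</code> CLI. The extension IDs below are the Marketplace identifiers at the time of writing; verify them if installation fails.</p>
<pre><code># Install both extensions from the command line.
code --install-extension Continue.continue        # Continue
code --install-extension saoudrizwan.claude-dev   # Cline (formerly Claude Dev)
</code></pre>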
<h4 id="configure-continue--cline">Configure Continue &amp; Cline</h4>
<ul>
<li>
<p><strong>Continue Configuration</strong></p>
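<p>Below is a minimal sketch of a Continue model entry pointing at the local vLLM server. Field names follow Continue&rsquo;s <code>config.json</code> schema (newer releases use <code>config.yaml</code>); the port, model name, and placeholder API key are assumptions matching the vLLM launch command above.</p>
<pre><code>{
  "models": [
    {
      "title": "vLLM Kimi-K2.5 (local)",
      "provider": "openai",
      "model": "kimi-k2.5",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "EMPTY"
    }
  ]
}
</code></pre>
</li>
</ul>]]></description></item></channel></rss>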