Want to revolutionize your business with AI? Check SSW's Artificial Intelligence and Machine Learning consulting page.
"Vibe coding" is a trend that has taken the software development world by storm recently. It means developing via a coding agent, and never even looking at - let alone editing - the code. It has also become synonymous with low-quality code π.
When writing code as a professional developer, "vibe coding" may make it easy to get a solution up and running without worrying about the details, but as soon as you commit it to the repository under your name, it becomes your responsibility, as if you had written it yourself.
GitHub Copilot CLI is incredibly powerful, but giving AI deep access to your terminal and file system can be concerning. When you use features like `--allow-all-tools` (which approves all actions), Copilot can execute commands on your behalf, which means one wrong suggestion could have serious consequences.

Running Copilot CLI in a secure Docker container provides the best of both worlds: powerful AI assistance with strict security boundaries that limit the "blast radius" of any potential mistakes.
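As an illustration, one way to sandbox the CLI is to run it in a throwaway container with only the current project folder mounted, so even fully auto-approved actions can't touch the rest of your machine. The image, package name, and exact flags below are an example setup, not an official recipe - check the Copilot CLI documentation for your environment:

```bash
# Run Copilot CLI in a disposable container; only the current
# directory is visible to it, and the container is removed on exit.
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  node:22 \
  sh -c "npm install -g @github/copilot && copilot --allow-all-tools"
```

Because the container is discarded when you exit, anything the agent installs or breaks outside `/workspace` disappears with it.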
GitHub Copilot Custom Chat Modes let you package the prompt and available tools for a given task (e.g. creating a PBI) so your whole team gets consistent, high-quality outputs.
Without a chat mode, individuals might copy/paste prompts. Important acceptance criteria or governance links get lost. New starters don't know the "standard way" and quality varies.
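A custom chat mode is just a Markdown file with frontmatter, checked into your repo. The sketch below shows the general shape - the file name, frontmatter keys, and tool names are illustrative, so check the VS Code / GitHub Copilot docs for the exact schema:

```markdown
---
description: Create a PBI using our team's standard template
tools: ['codebase', 'search']
---
You are helping the team write a Product Backlog Item (PBI).
Always include: a user story, acceptance criteria, and links to
the relevant governance rules. Ask for missing details before drafting.
```

Saved somewhere like `.github/chatmodes/create-pbi.chatmode.md`, the mode then shows up for everyone in the chat mode picker, so the whole team shares one prompt instead of private copy/paste variants.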
The advent of GPT and LLMs has sent many industries for a loop. If you've been automating tasks with ChatGPT, how can you share the efficiency with others?
GPT is an awesome product that can do a lot out-of-the-box. However, sometimes that out-of-the-box model doesn't do what you need it to do.
In that case, you need to provide the model with more training data, which can be done in a couple of ways.
When you're building a custom AI application using a GPT API you'll probably want the model to respond in a way that fits your application or company. You can achieve this using the system prompt.
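A minimal sketch of how that looks, assuming the official `openai` Python SDK - the model name and prompt text are placeholders, not recommendations:

```python
# Steering a GPT chat completion with a system prompt.
# The system message sets tone and constraints for every reply.
def build_messages(system_prompt: str, user_input: str) -> list:
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

def ask(client, user_input: str) -> str:
    # client = OpenAI()  # from the openai SDK; needs OPENAI_API_KEY
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=build_messages(
            "You are Northwind's support assistant. "
            "Answer politely, in 2 sentences or less.",
            user_input,
        ),
    )
    return response.choices[0].message.content
```

The user never sees the system prompt, but every response is shaped by it - which is how you give a generic model your application's voice and guardrails.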
AI agents are autonomous entities powered by AI that can perform tasks, make decisions, and collaborate with other agents. Unlike traditional single-prompt LLM interactions, agents act as specialized workers with distinct roles, tools, and objectives.
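To make the idea concrete, here is a toy agent loop in Python. The "model" is a hard-coded stub standing in for an LLM call; in a real system the model would decide which tool to invoke and with what input:

```python
# Toy agent loop: a "model" picks a tool, the agent executes it.
def calculator(expression: str) -> str:
    # Evaluate simple arithmetic with builtins disabled.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(task: str) -> dict:
    # A real agent would ask an LLM which tool to use and with what
    # arguments; this stub always routes the task to the calculator.
    return {"tool": "calculator", "input": task}

def run_agent(task: str) -> str:
    decision = stub_model(task)
    tool = TOOLS[decision["tool"]]
    return tool(decision["input"])
```

The key difference from a single prompt is the loop structure: the model chooses actions, the agent runs them, and (in a full implementation) the results are fed back so the model can decide its next step.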
Repetitive tasks like updating spreadsheets, sending reminders, and syncing data between services are time-consuming and distract your team from higher-value work. Businesses that fail to automate these tasks fall behind.
The goal is to move from humans doing and approving the work, to automation doing and humans approving the work.
There are lots of awesome AI tools being released, but combining them can become very hard as an application scales. Semantic Kernel can solve this problem by orchestrating all our AI services for us.
When using Azure AI services, you often choose between Small Language Models (SLMs) and powerful cloud-based Large Language Models (LLMs), like Azure OpenAI. While Azure OpenAI models offer significant capabilities, they can also be expensive. In many cases, SLMs like Phi-3 can perform just as well for certain tasks, making them a more cost-effective solution. Evaluating the performance of SLMs against Azure OpenAI services is essential for balancing cost and performance.
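One lightweight way to do that comparison is a shared evaluation harness that runs the same test cases against any model callable and reports accuracy and latency. This is an illustrative sketch - the model function here is a stand-in for real Azure OpenAI or Phi-3 clients:

```python
import time

def evaluate(model_fn, test_cases):
    """Run model_fn over (prompt, expected_substring) pairs.

    Returns accuracy (did the answer contain the expected text?) and
    average latency in ms, so an SLM and an LLM endpoint can be
    compared on the same footing.
    """
    correct, total_ms = 0, 0.0
    for prompt, expected in test_cases:
        start = time.perf_counter()
        answer = model_fn(prompt)
        total_ms += (time.perf_counter() - start) * 1000
        if expected.lower() in answer.lower():
            correct += 1
    n = len(test_cases)
    return {"accuracy": correct / n, "avg_latency_ms": total_ms / n}

# Stand-in "model" - in practice this would call a Phi-3 endpoint,
# and a second callable would wrap Azure OpenAI.
def cheap_slm(prompt: str) -> str:
    return "Paris is the capital of France."

cases = [("What is the capital of France?", "Paris")]
```

Running the same `cases` against both callables gives you numbers to weigh: if the SLM's accuracy is close enough for your task, the cost savings may justify choosing it.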
- Do you know the best workflow for AI assisted development?
- Do you use GitHub Copilot CLI in a secure dockerised environment?
- Do you create reusable GitHub Copilot Chat Modes?
- Do you create custom GPTs?
- Do you know how to train GPT?
- Do you use GPT API with system prompt?
- Do you build agentic AI?
- Do you automate tasks with low-code tools and AI?
- Do you use Semantic Kernel?
- Do you evaluate SLMs for performance compared to Azure AI's cloud-based LLMs?
- Do you pick the best Large Language Model for your project?
- Do you write integration tests for your most common LLM prompts?
- Do you know the best chatbot for your website?
- Do you know how you can leverage the ChatGPT API?
- Do you know how to embed UI into an AI chat?
- Do you use AI-powered embeddings?
- Do you use AI tools in your prototype development?
- Do you build hallucination-proof AI assistants?
- Do you handle AI Hallucinations the right way?
- Do you provide an llms.txt file to make your website LLM-friendly?
- Do you know your Dataverse AI options?
- Do you keep task summaries from AI-assisted development?
- Do you attribute AI-assisted commits with co-authors?
- Do you configure AI assistants to keep all working files inside the repository directory?