Python 3.14 is a rational constant 29 Nov 2024, 10:00 am
This (half-) month in Python and elsewhere: Get your feet wet with Pi-thon, er, Python 3.14, in its alpha 2 release! See how standalone Python app executables work with PyInstaller. Use Python’s pattern matching to unlock type-based decision-making superpowers. And Microsoft Azure has a brand-new, super-fast Python code sandboxing service.
Top picks for Python readers on InfoWorld
Python to C: What’s new in Cython 3.1
Type hinting for volatile and pointer types, and compatibility with Python’s free-threaded builds, are the big new attractions for the #1 Python-to-C library.
Azure Container Apps launches Python, JavaScript interpreters
Spin up Python (and JavaScript) apps faster than ever on Azure in custom code sandboxes. Or bring your own existing containerized code if you have it!
How to use PyInstaller to create Python executables
Redistributing Python apps as executables is tricky. PyInstaller makes it less tricky, but our walkthrough shows you all the techniques to use and traps to avoid.
How to use structural pattern matching in Python
The pattern-matching syntax introduced in Python 3.10 allows for powerful new programming techniques for decision-making in apps.
More good reads and Python updates elsewhere
Python 3.14 alpha 2 is out!
Try out the most bleeding-edge version of Python available, especially for the sake of features pushed forward from Python 3.13 like deferred evaluation of annotations.
NanoCube, an in-memory OLAP engine for DataFrames
See how this “minimalist … engine for lightning fast point queries on DataFrames” stacks up against DuckDB — and Polars, and Pandas, and Arrow, and SQLite.
Multipython: A Docker image with Python versions from 2.7 to 3.14
Get all your cross-version Python testing needs satisfied with a single docker pull.
Pyloid: A Python-centric alternative to Electron or Tauri
Build Python-based, cross-platform desktop apps powered on the front end by QtWebEngine and PySide6.
Are we worse at cloud computing than 10 years ago? 29 Nov 2024, 10:00 am
Is it just me, or are we seeing more cloud project failures today than 10 years ago? Logic suggests we improve over time, but the metrics don’t support that assumption.
A cloud project 10 years ago typically involved migrating a few test programs and systems. Now, the systems involved are much more complex, with many more moving parts that affect multiple or all aspects of an enterprise’s operations. Today’s push toward AI means that complicated, data-intensive systems are now the preferred models for cloud systems. Due to the skills shortage and planning problems, these complex systems present significant obstacles to enterprise cloud adoption even on a good day.
We need to call in the A-Team to get cloud and AI projects done on time, done on budget, and done right. Unfortunately, the A-Team has a years-long waiting list. There are just not enough cloud migration and development skills to go around. Many organizations are settling for “less than ideal” talent who make incorrect calls and put cloud and AI projects on the path to failure.
Cloud migration projects fail or stall
Tech research giant Gartner states that 83% of all data migration projects fail and that more than 50% of migrations exceed their budget. Consider these additional statistics from SoftJourn: 50% of cloud migration projects either fail or stall, 56% of businesses encounter compliance and security challenges, and 44% of companies initiate their cloud migration with insufficient planning. This is not good news for anyone.
Let’s explore the key factors that contribute to this dismal success rate:
Inadequate planning. A lack of preparation leads to compatibility issues, unexpected costs, and technical roadblocks that could have been anticipated with proper assessment. Cloud projects have many dependencies. You need to pick a database before picking a development platform, and you have to determine performance requirements before moving to containers and microservices. I’m seeing more projects stall or fail due to the lack of simple planning.
The increasing complexity of IT systems. Organizations struggle with intricate IT architectures and interdependencies. I’ve covered the complexity issue to death because complexity is becoming the “silent killer” of cloud development and deployment. It can be managed, but it requires adequate planning (see the previous point).
The talent gap. The shortage of technical cloud expertise makes it increasingly difficult for organizations to execute and maintain cloud initiatives. This has become a critical bottleneck in cloud project success.
Uncontrolled cloud costs. Many organizations are seeing unexpected increases in post-migration operational expenses. The lack of adequate cost controls and automated mitigations leads to budget overruns and project failures. Projects coming in on budget are rare. Also, as I covered in my latest book, enterprises spend about 2.5 times the amount they budgeted to operate their cloud-based systems. Although finops can address some of these issues, strategic cost-planning problems are not being effectively managed.
Compliance and security challenges. Compared to earlier cloud adoption phases, today’s projects face a significant increase in security-related complications and compliance issues.
Post-migration application performance. The fact that many organizations struggle with application performance indicates that we need to be more effective at maintaining service levels during cloud transitions.
Keys to cloud project success
Rapid advancements in cloud technologies combined with mounting pressures for digital transformation have led organizations to hastily adopt cloud solutions without establishing the necessary foundations for success. This is especially common if companies migrate to infrastructure as a service without adequate modernization, which can increase costs and technical debt.
The growing pressure to adopt AI and generative AI technologies further complicates the situation and adds another layer of complexity. Organizations are caught between the need to move quickly and the requirement for careful, strategic implementation.
This decline in success rates is a critical warning sign for the industry. Our current approaches to cloud computing projects need serious reconsideration and improvement. The good news is there are ways to fix the problems. Focus on these key areas to address the most pressing cloud computing challenges:
Comprehensive planning. Include thorough application assessment, dependency mapping, and detailed modeling of the total cost of ownership before migration begins. Success metrics must be clearly defined from the outset.
A phased approach. Start with less critical applications and smaller projects to build expertise before scaling up to more challenging ones.
Skills development. Build internal cloud centers of excellence with targeted training programs and strategic partnerships with managed service providers.
Strong governance. Continuous monitoring and optimization processes must become standard practice, along with explicit operating models, robust cost management frameworks, and comprehensive security guidelines.
Application modernization. Organizations must pursue appropriate refactoring and cloud-native development based on business value rather than novelty.
The overarching goal is to approach cloud adoption as a strategic transformation. We must stop looking at this as a migration from one type of technology to another. Cloud computing and AI will work best when business objectives drive technology decisions rather than the other way around.
Lessons from the past
This isn’t the first or last time we will face a crisis in IT. I’m old enough to remember when PCs first appeared in corporate offices. More than 90% of employees (including those in IT) had no idea what to do with them. To compound the PC problems, cell phones showed up in corporate offices soon after. The skills to seamlessly integrate these systems into enterprise operations didn’t exist. Most organizations stumbled around in the dark for much longer than they should have because they couldn’t find or train or afford the talent they needed to make things right. The typical rallying cry became, “Good luck, everyone!”
Cloud and AI are today’s iterations of PCs and cell phones. We will survive this crisis, but let’s learn from some of our past mistakes. If you can’t find the talent, be willing to invest in training. Put away the darts and dart boards. Build comprehensive short- and long-term planning into the budget for all aspects of your cloud and AI projects. Pursue projects that your current staff can handle. Study what’s broken and devise a viable plan to fix it. By fixing what’s broken, your staff should learn how to do things right the first time.
Easy, right? Good luck, everyone.
Exactly what would an AI-centric OS look like? 28 Nov 2024, 6:29 pm
Word that a group of former Google executives plans to build an operating system for AI agents underscores the fact that there is a distinction between what application platforms are doing with agents today and what an AI agent-first OS would entail, said industry analyst Brian Jackson.
Jackson, principal research director at Info-Tech Research Group, said Wednesday that the OSes we use today “were built for a software architecture that is file-based and designed to be used with a computer and a mouse. An OS that is built around AI could look at defining computing as data-based, with intelligent models that continuously learn and adapt based on new data.”
He made the comments following the announcement from venture capitalist CapitalG, Alphabet’s independent growth fund, that it is co-leading a $56 million seed round in /dev/agents, an organization co-founded by former Google vice president of engineering David Singleton, along with former colleagues Hugo Barra, Ficus Kirkpatrick, and Nicholas Jitkoff, all of whom had held senior positions in the company.
In a post on its website, CapitalG described the new company’s mission as “building the operating system that will arm developers with the tools they need to work with AI agents.”
Singleton, who will serve as CEO for /dev/agents, outlined the problem the company wants to solve on his LinkedIn page: “modern AI will fundamentally change how people use software in their daily lives. Agentic applications could, for the first time, enable computers to work with people in much the same way people work with people.”
He added that modern AI will not “happen without removing a ton of blockers. We need new UI [user interface] patterns, a reimagined privacy model, and a developer platform that makes it radically simpler to build useful agents. That’s the challenge we’re taking on.”
In its posting, CapitalG said the new company, which will be based in San Francisco, is building a team to “work closely with users and developers across the full stack (from tuning models up through to the UI layer) to solve a novel class of system and UX [user experience] problems created by AI.”
There are few details yet on how the new OS will be developed: The /dev/agents web site is definitely a work in progress.
Every tech vendor, said Info-Tech’s Jackson, “will always want to win over developers to use their platforms to solve different enterprise problems. The more developers creating functionality and extending your platform, the more value for customers and the more money for the platform.”
It is, he said, “a win-win-win (in theory). AI agents are just the next wave of where these tech vendors want to compete to get developers invested. But by focusing on building a new OS from the ground up, /dev/agents has the chance to fundamentally change our relationship with computers.”
In a new operating system, he added, the UI “could be based on less abstracted modes of interaction like natural language, gestures, or even eye movements. And AI agents would be like the next generation of applications built on top of the OS, where developers could use tooling to combine those AI elements in new ways to solve problems.”
There is no questioning AI agents’ importance, as evidenced by vendors ranging from Salesforce and Microsoft to ServiceNow and Snowflake now offering development platforms they claim make it simpler to build useful agents.
In a blog post released last month, Gartner analyst Tom Coshow wrote, “today’s AI models perform tasks such as generating text, but these are ‘prompted’ — the AI isn’t acting by itself. That is about to change with agentic AI (AI with agency). By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.”
While agentic AI is still in early stages, it is, he said, “not too soon to gain an understanding of the technology, determine how to manage risk, and prepare your tech stack.”
Intelligent agents in AI, wrote Coshow, “will change decision making and improve situational awareness in organizations through quicker data analysis and prediction intelligence. While you’re sleeping, agentic AI could look at five of your company’s systems, analyze far more data than you ever could, and decide the necessary actions.”
Other analysts are in agreement. Deloitte, for example, is predicting that a quarter of companies that use generative AI (genAI) will launch agentic AI pilots or proofs of concept in 2025, with that number growing to half of genAI users by 2027. “Some agentic AI applications, in some industries, and for some use cases, could see actual adoption into existing workflows in 2025, especially by the back half of the year,” said Deloitte staff in a recent report.
Uber branches out into AI data labeling 28 Nov 2024, 5:18 pm
Uber no longer offers just rides and deliveries: It’s created a new division hiring out gig workers to help enterprises with some of their AI model development work.
Scaled Solutions grew out of the company’s own needs for data annotation, testing, and localization, and is now ready to offer those services to enterprises in retail, automotive and autonomous vehicles, social media, consumer apps, generative AI, manufacturing, and customer support. Its first customers include Aurora Innovation, which makes self-driving software for commercial trucks, and game developer Niantic, which is building a 3D map of the world.
The skills Uber requires its new gig workers to have vary, Chris Brummitt, the company’s senior director for communications in Asia-Pacific, said via email. “It depends on the task, but some might need language skills, some programming, some no specific skills.”
Copilot Studio Agents get a major upgrade via Microsoft 365 Copilot 28 Nov 2024, 10:00 am
Microsoft Ignite 2024 put an end to Microsoft’s dalliance with chatbots. Not because the company is walking away from natural language interfaces, but because it’s taking what it learned with its first Copilots and refocusing on tangible productivity gains from large language models (LLMs).
This is not surprising. Although LLM-powered chatbots are an amusing novelty with some useful text-generation features, Microsoft is delivering new ways of working with them, first grounding them in enterprise content with its Microsoft 365 Copilot and then adding tools for working with OpenAPI-powered services in its foundational AI orchestration tool, Semantic Kernel.
Much of Microsoft’s work during the past year or so has led to tools focused on what we’re calling “agentic AI.” Building on the 30-plus-year-old concept of autonomous agents, agentic AI mixes several different LLM-based development techniques to deliver self-assembling process automations that convert a natural language query into a long transaction that works across a set of well-defined service endpoints.
Build low-code agents with Copilot Studio
New tools such as Azure AI Foundry are driving some of the shift to agentic AI. Microsoft is putting the same tools in the hands of its low- and no-code developers in Copilot Studio. The intention is to kickstart agent development, not from traditional developers, but from business users who are at the heart of complex business processes and are looking for tools to simplify things.
Some of the Copilot Studio features detailed at Ignite 2024 had been announced at an event in London a month or so earlier. Ignite saw Microsoft put things in context, adding important details and showing how low-code development would integrate with more complex AI applications built with Azure AI Foundry.
Bringing custom and prepackaged AI together this way fits with the fusion teams model used to link Power Platform applications with the rest of your development process. It allows you to use Azure to build and tune custom multimodal AI models, which can then be quickly included in users’ own applications (with appropriate guardrails and API management rules).
Bringing Power Platform into the full developer ecosystem
Power Platform has stood apart from the rest of Microsoft’s stable of developer tools. The separation was practical; it sold to a different market and Microsoft wanted to avoid confusion. The logical replacement for tools like the classic Visual Basic and Silverlight, it’s developed its own Excel Functions-derived programming language, with its connectors and actions the only link to the wider Microsoft ecosystem. That model is changing, and Copilot Studio’s rapidly developing agentic AI platform is leading the way.
Part of the change comes from Microsoft’s understanding that process automation requires significant input from the business alongside technology expertise. Agentic AI will require both programming and process-modeling skills to build out a pipeline of AI assistants supporting a long workflow. Mixing and matching AI agents in a pipeline lets you use the right models for the right piece of the workflow, for example, an OCR model feeds a tool that uses a Microsoft 365 Copilot call to schedule the appropriate field maintenance engineer for a specific component failure.
Using Copilot Studio with the Microsoft 365 Agent SDK
One of the more interesting new features announced at Ignite was a set of tools for building Microsoft 365-based Agents, allowing access to Copilot Studio applications from your C# code (with Python and Node.js support on the project road map). This new SDK bridges not only Azure AI Foundry and Copilot Studio but also lets you work with third-party AI platforms.
The Microsoft 365 Agent SDK has four key components. It offers multiple user interfaces, from the Microsoft 365 Copilot using tools like Copilot Pages, to Teams and even Slack. Grounding data comes from services that include the Microsoft Graph, Azure Fabric, and Azure AI Search. Everything is orchestrated from Semantic Kernel, with agent support from both Copilot Studio and Azure AI Foundry.
Integration between Copilot Studio and the Microsoft 365 Agent SDK is two-way: You can use Semantic Kernel to add memory and skills to a Copilot Agent, adding additional features, or your C# code can call an existing Copilot Agent without needing to build API integration thanks to the Power Platform’s library of connectors.
Using the two tools together extends your reach, bringing Copilot Agents to more than 15 channels, from chat and email to SMS and Teams. You’ll probably be familiar with most of them if you’ve used the original Bot Framework for building basic chatbots. The Microsoft 365 Agent Framework is its successor (in much the same way Copilot Studio builds on the original Power Virtual Agents service).
A flexible agent future
This mix of low code and pro code should give both platforms flexibility, allowing you to mix and match the features you need to build and deliver the appropriate agent-powered workflow for your specific business process needs. Where it used to be hard to go from a Power Platform prototype to a fully managed application, this approach will allow business teams to find problems and experiment with agentic workflow solutions, which can then be included in a larger business process automation project.
There’s more to building effective agents than code: You need good data sources as well as a variety of tunable models. The latest update to Copilot Studio uses the Azure AI Search vector indexing tool to provide data sources for tools using retrieval-augmented generation (RAG), with additional tools to support tuning responses to reduce the risks that come from LLMs generating incorrect responses. You will be able to use the Copilot Control System to provide appropriate policies for managing your services, including adding access controls to reduce the risk of sensitive data leaking through agent operations.
It will be interesting to see how tools like these step outside traditional workflows, having the next generation of AI models directly control user PCs and automating workflows that extend from the data center to desktops. Although Microsoft 365’s Graph APIs let today’s agentic AIs work with familiar productivity applications, there are still many enterprise applications that have bespoke user interfaces built using tools like Visual Basic. These are currently inaccessible from most AI tools.
What’s next? Agents on the desktop
It’s not hard to imagine using a desktop workflow automation tool like the bundled Power Automate to identify desktop application screens and controls that are necessary to complete a hybrid desktop/cloud workflow. Once you have them labeled, a next-generation LLM can respond to events in an agent workflow by controlling those applications, automating user interactions that previously would have added latency to a business process.
Applying agents to applications with no API beyond the Windows desktop is an intriguing prospect, especially if interactions can be carried out using Azure-hosted cloud PCs rather than interfering with a user’s normal tasks. Once you start to think about how agentic AI puts the robot into robotic process automation, services like Windows 365 start to make a lot more sense as part of a suite of tools that enable widescale deployment of these tools, even with applications written decades before we had access to the first LLMs.
Microsoft has chosen to focus much of its agentic AI tools on business process automation to manage long workflow-based transactions, with an underlying orchestration engine to manage context. This requires collaboration between software developers and users. By providing different ways to mix low-code tools with traditional development platforms, users have the ability to choose the approach that’s right for them, their experience with AI, and their business needs.
As always, it’s a pragmatic way of delivering a new technology. It can change quickly as new tools and models arrive, which they inevitably will—and faster than you expect.
How to use ref structs in C# 13 28 Nov 2024, 10:00 am
In the C# programming language, structs or structure types are value types that allow faster access because they are usually stored on the stack. However, although structs reduce memory footprints and eliminate garbage collection overheads, they are not always the best choice in high-performance scenarios where guaranteed stack allocation and deallocation is critical, because a regular struct can be boxed and end up on the managed heap.
In such scenarios, C# provides a better alternative called a ref struct. Whereas both structs and ref structs are allocated on the stack and don’t require garbage collection, there are subtle differences between the two types and their use cases.
In this article we’ll examine ref structs, their features and benefits, and how we can use them in C#. To work with the code examples provided in this article, you should have Visual Studio 2022 Preview installed in your system. If you don’t already have a copy, you can download Visual Studio 2022 here.
Create a console application project in Visual Studio 2022
First off, let’s create a .NET Core 9 console application project in Visual Studio 2022. Assuming you have Visual Studio 2022 installed, follow the steps outlined below to create a new .NET Core 9 console application project.
- Launch the Visual Studio IDE.
- Click on “Create new project.”
- In the “Create new project” window, select “Console App (.NET Core)” from the list of templates displayed.
- Click Next.
- In the “Configure your new project” window, specify the name and location for the new project.
- Click Next.
- In the “Additional information” window shown next, choose “.NET 9.0 (Standard Term Support)” as the framework version you would like to use.
- Click Create.
We’ll use this .NET 9 console application project to work with ref structs in the subsequent sections of this article.
What is a ref struct?
In the C# programming language, a ref struct is a stack-allocated value type. In contrast to regular structs, you cannot box or unbox ref struct instances. Because ref structs remain in the stack throughout their life cycle, they are a great choice for high-performance applications where resource consumption is critical. The ref struct keyword was first introduced in C# 7.2. However, C# 13 extends their support in a very useful way. With C# 13, we can now use local variables of the ref or ref struct type in iterator and asynchronous methods.
Benefits of using ref structs
The ref struct type has two main benefits — enhanced performance and a deterministic lifetime. Let’s first discuss performance.
Because ref structs reside on the stack only, instances of ref struct are allocated and deallocated much faster than instances of classes that reside on the managed heap. Because there is no garbage collection overhead, ref structs enable much faster data access as well.
Let us understand this with a code example. The following piece of code demonstrates how you create a ref struct.
public ref struct MyRefStruct
{
    //Omitted for brevity
}
The following code snippet shows how you can use a ref struct with a Span<int> in C#. Note how the parameter of type Span<int> in the constructor of the ref struct is assigned to the private Span<int> field of the ref struct.
public ref struct MyRefStruct
{
    private Span<int> myPrivateSpan;

    public MyRefStruct(Span<int> mySpan)
    {
        myPrivateSpan = mySpan;
    }

    public int this[int index]
    {
        get => myPrivateSpan[index];
        set => myPrivateSpan[index] = value;
    }
}
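For instance, a short usage sketch (the buffer size and values here are arbitrary) might wrap a stack-allocated buffer inside a method like this:

Span<int> buffer = stackalloc int[8];
var wrapper = new MyRefStruct(buffer);
wrapper[0] = 42;
Console.WriteLine(wrapper[0]); // prints 42; no heap allocation is involved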
Because a ref struct instance doesn’t require any heap allocation or boxing overheads when assigned to any other object, memory overheads are reduced and data access is accelerated. You can use BenchmarkDotNet to collect performance metrics when using ref struct instances in your application.
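As a rough illustration, here is a minimal sketch of how such a benchmark might be set up, assuming the BenchmarkDotNet NuGet package is referenced and reusing the MyRefStruct type shown above (the class and method names are hypothetical):

using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class SpanWrapperBenchmarks
{
    // Heap-based baseline: allocates a managed array on every call.
    [Benchmark(Baseline = true)]
    public int SumWithHeapArray()
    {
        var numbers = new int[128];
        for (var i = 0; i < numbers.Length; i++) numbers[i] = i;
        var total = 0;
        foreach (var n in numbers) total += n;
        return total;
    }

    // Stack-only version: a ref struct wrapping a stackalloc'd Span<int>.
    [Benchmark]
    public int SumWithRefStruct()
    {
        Span<int> buffer = stackalloc int[128];
        var wrapper = new MyRefStruct(buffer);
        for (var i = 0; i < 128; i++) wrapper[i] = i;
        var total = 0;
        for (var i = 0; i < 128; i++) total += wrapper[i];
        return total;
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<SpanWrapperBenchmarks>();
}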
The deterministic lifetime of ref structs is also an important benefit. Because instances of type ref struct are always allocated on the stack, they will be automatically deallocated from the stack memory when your program no longer requires them. Hence, instances of ref struct are a good choice for creating temporary objects.
The following code snippet illustrates how we can create an instance of a ref struct inside a method.
public class MyClass
{
    public void MyMethod()
    {
        MyRefStruct myRefStruct = new MyRefStruct();
    }
}
In this example, as soon as the control moves out of the scope in which the ref struct instance has been created, the instance will be destroyed and the memory it occupied in the stack will be deallocated. So, in this example, we can say that the lifetime of the ref struct instance is restricted to the method scope.
Limitations of a ref struct in C#
C# 13 removes a few longstanding limitations of the ref struct type. A ref struct instance can now be used in iterators and asynchronous methods. In addition, a ref struct in C# 13 can implement interfaces and can be used as a type argument.
Nevertheless, a ref struct in C# still has many limitations. You should keep the following key limitations in mind:
- An instance of type ref struct cannot be an element of an array.
- A ref struct cannot be boxed to either System.Object or System.ValueType.
- A ref struct cannot be a member of a struct.
- You cannot use a ref struct as a generic argument when calling methods.
- You cannot capture an instance of a ref struct in a local function or a lambda expression.
- You cannot convert an instance of type ref struct to an instance of the type of the interface it implements.
- A ref struct must implement all members of an interface it implements (even if the members of the interface have been implemented by default in the interface).
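To make a couple of these limitations concrete, here is a hedged sketch (the type and member names are hypothetical; the commented-out lines show code the compiler rejects):

public class RefStructLimitationsDemo
{
    public void Demo()
    {
        MyRefStruct local = default; // a stack-only local is fine

        // MyRefStruct[] items = new MyRefStruct[4]; // error: ref structs cannot be array elements
        // object boxed = local;                     // error: ref structs cannot be boxed
        // Action capture = () => Use(local);        // error: ref structs cannot be captured by a lambda
    }
}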
Consider the following code that shows how a ref struct implements an interface.
public interface MyInterface
{
    public void MyInterfaceMethod()
    {
        Console.WriteLine("Inside MyMethod");
    }
}

public ref struct MyRefStruct : MyInterface
{
    public void MyRefStructMethod()
    {
        Console.WriteLine("Inside MyRefStructMethod");
    }
}
Note that because we have not implemented the interface method inside the ref struct, the compiler will generate an error, as shown in Figure 1.
Figure 1. The compiler error generated when the ref struct does not implement the interface member. (Image: IDG)
Use the dispose pattern with ref structs
You can implement the dispose pattern using a ref struct. To do this, you should define a Dispose method inside the ref struct that has a void return type and doesn’t accept any parameters. Because C# 13 allows you to define a ref struct that implements an interface, you can implement the dispose pattern by defining a ref struct that implements the IDisposable interface. The following code snippet shows how you can do this.
public ref struct MyDisposableRefStruct
{
    public void Dispose()
    {
        //Omitted for brevity
    }
}
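Because the struct exposes a parameterless void Dispose method, it also works with a using declaration. Here is a brief usage sketch (the method name is hypothetical):

public static void ProcessTemporaryResource()
{
    using var resource = new MyDisposableRefStruct();
    // work with the resource here; Dispose() runs automatically when the scope exits
}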
Besides reducing memory consumption and eliminating garbage collection overheads, another significant benefit of a ref struct is that you can use it with pointers in a safe context. A ref struct is a great choice in applications where you need stack-only allocation and high performance. However, you should be careful to avoid overusing ref structs in your application. For example, if you use ref structs in deeply recursive methods when stack memory is limited, you may encounter performance issues.
Uno Platform unveils visual designer for cross-platform .NET development 28 Nov 2024, 12:06 am
Uno Platform has unveiled the Uno Platform Studio, featuring a suite of developer productivity tools including Hot Design, a visual designer for building cross-platform .NET apps.
Introduced November 14, the Uno Platform Studio is intended to help developers navigate parts of the development life cycle, from design handoff to building the app. Hot Design, one of several tools in the suite, lets developers transform a running app into a design from any IDE or any OS. Another key feature of Uno Platform is a Hot Reload capability, which updates the app with new code during development to confirm that changes were applied. Also highlighted is a Figma plugin for design-to-code functionality. Figma and Hot Reload are available now while Hot Design is expected in the first half of 2025. Developers can join a waitlist for early access.
With Hot Design, any change made in the designer updates the XAML code in real time. Further, any code changes are immediately reflected in the UI, allowing for a consistent design-to-code experience and streamlining the development process, Uno Platform said. Hot Design is IDE-agnostic and integrates with the Visual Studio IDE, the Visual Studio Code editor, and the Rider IDE on all operating systems. Developers with Hot Design can develop applications with a rich UI on every platform supported by .NET, the company said. That includes Windows, Mac, and Linux desktops, embedded devices, iOS and Android mobile devices, and the web (via WebAssembly).
Hot Design supports user-created and third-party UI components, simplifying customization and integration. This feature is crucial for developers working with sophisticated design systems and controls, the company said. Developers also can work with real data via Hot Design. Traditional designers have required creating mock data, Uno Platform said.
Hot Reload lets developers see UI changes without having to rebuild the app. Hot Reload works with Hot Design and provides a visual indicator to help users monitor further changes during development. The Figma plugin can export Figma designs to XAML or C# Markup with one click, allowing the resulting markup to be easily pasted into Uno Platform Studio and Hot Design.
AWS re:Invent 2024: The future of cloud computing (and where AWS fits in it) 27 Nov 2024, 4:37 pm
AWS re:Invent, one of the premier events in the cloud industry, serves as a platform for Amazon Web Services (AWS) to introduce new services, capabilities, and enhancements to its extensive portfolio. The 2024 event kicks off on Monday, December 2, with a live event in Las Vegas and a virtual option as well.
As one of the leading hyperscalers, AWS claims to continuously push the boundaries of cloud features and scalability. Expect announcements that impact cloud computing’s role in your businesses. And also watch for signs that the cloud giant may be changing direction or losing ground to competitors like Microsoft Azure and Google Cloud.
AWS re:Invent also strives to provide insights into the latest trends and innovations within the cloud industry — topics such as machine learning, artificial intelligence, serverless computing, and IoT — allowing attendees to gain an understanding of emerging technologies and their potential applications.
Follow this page for the latest news and insights from AWS re:Invent 2024, and check out recent related coverage below.
Related AWS news and analysis
AWS closes several cloud services to new customers
August 8, 2024: AWS has closed new customer access to several of its cloud services, including the AWS Cloud9 cloud-based IDE and the AWS CodeCommit private Git repository service. The closings were posted in late July and were effective immediately. Other AWS services closed to new customers include Amazon S3 Select, Amazon S3 Glacier Select, Amazon CloudSearch, Amazon Forecast, and AWS Data Pipeline.
AWS offers new AI certifications
June 4, 2024: With a shortage of IT workers with AI skills looming, AWS is offering two new certifications to help enterprises building AI applications on its platform find the necessary talent. One of the certifications, AWS Certified AI Practitioner, is a foundational-level certification that helps workers from a variety of backgrounds demonstrate that they understand AI and generative AI concepts, can recognize opportunities that benefit from AI, and know how to use AI tools responsibly.
IBM and AWS forge global alliance, streamlining access to AI and hybrid cloud solutions
May 3, 2024: IBM expanded its software portfolio to 92 countries in AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors (ISVs). Previously, the digital catalog was available in just five countries. This strategic alliance allows businesses in these countries access to IBM’s software products, including data technologies and AI, directly through the AWS Marketplace.
AWS moves Amazon Bedrock’s AI guardrails, and other features to general availability
April 23, 2024: AWS is moving some features of its generative AI application-building service, Amazon Bedrock, to general availability. These features include guardrails for AI, a model evaluation tool, and new large language models (LLMs).
Amazon downsizes AWS in a fresh cost-cutting round
April 4, 2024: In a fresh round of layoffs, Amazon is laying off hundreds in the company’s cloud division, Amazon Web Services (AWS). The layoffs will affect AWS’ sales, marketing, and brick-and-mortar technology development teams. This move comes as the company seeks to streamline its operations and focus on areas that deliver a better return on investment.
Here’s what AWS revealed about its generative AI strategy at re:Invent 2023
December 1, 2023: At AWS’ annual re:Invent conference, CEO Adam Selipsky and other top executives announced new services and updates to attract burgeoning enterprise interest in generative AI systems and take on rivals including Microsoft, Oracle, Google, and IBM.
AWS releases Cloud Institute virtual program to train developers
October 10, 2023: In a move to address the scarcity of trained cloud developers, AWS is releasing a new virtual program, dubbed AWS Cloud Institute. The one-year program, with 12 courses spread across four quarters, has been designed by AWS experts with input from customer enterprises and partners such as Accenture, AWS said. The courses include hands-on, game-based learning and exercises such as building applications.
Python to C: What’s new in Cython 3.1 27 Nov 2024, 10:00 am
Cython is the most convenient way to bridge the C and Python languages in Python programs. Developers use it to write C extensions or wrap C libraries for use in Python. Now, a major new release is on the way. While it’s in its early stages, Cython 3.1 adds powerful new type annotation features, and it includes compatibility with the free-threaded or “no-GIL” build of Python. It’s worth a spin, especially for developers who like to stay ahead of the curve.
Support for free-threaded Python
Python 3.13 introduced the first public, if experimental, “free-threaded” or “no-GIL” builds of the language, which we’ll call “3.13t.”
3.13t allows CPU-bound Python threads to run with true parallelism. As a project, it’s still in the early stages of development, so it’s offered strictly on an opt-in basis: you have to specifically install the free-threaded build in order to use it.
For a long time, Cython, and C modules for Python generally, were a way to sidestep the GIL, the global interpreter lock. C code that doesn’t call into the CPython runtime isn’t constrained by the GIL. With 3.13t, conventional Python programs can now enjoy the same freedom.
Cython 3.1 adds basic support for 3.13t. This support is considered experimental, just as free-threading itself is experimental, so you shouldn’t rely on it for production use.
Cython modules need special build instructions to work with 3.13t. If a free-threaded Python tries to load a non-free-threaded-compatible Cython module, the interpreter will drop back to GIL mode for compatibility.
How to build free-threaded Cython modules
There are a few steps to building free-threaded Cython modules.
1. Install the pre-release version of Cython on 3.13t
As of this writing, Cython 3.1 is not yet available on PyPI, so you’ll have to install it from GitHub. To minimize problems you might have with building Cython itself on 3.13, you may want to install the uncompiled version of Cython.
First, set the environment variable NO_CYTHON_COMPILE to the string "true". Then install Cython from GitHub:
pip install git+https://github.com/cython/cython
2. Add a compiler directive to Cython modules
To mark a module as being compatible with free-threading, you will need to add this declaration at the top of the module:
# cython: freethreading_compatible = True
Note that this declaration doesn’t by itself cause the module to be compiled for free-threading. It’s just a way to inform the Python runtime that it can stay in free-threaded mode when the module is imported.
3. Add the free-threading build macros
The setup.py file used to build Cython extensions needs to set a C macro to allow your Cython modules to be built with free-threaded compatibility. The macro Py_GIL_DISABLED (note that’s Py, not PY!) should be set to 1.
One convenient way to do this automatically is to use the sys._is_gil_enabled() method to check if the macro needs to be used. A recipe like this might work:
import sys
from setuptools import Extension

macros = []
# Define the Py_GIL_DISABLED macro only when running on a free-threaded build
if getattr(sys, "_is_gil_enabled", None) and not sys._is_gil_enabled():
    macros = [("Py_GIL_DISABLED", "1")]

ext_modules = [
    Extension(
        "compute",
        ["compute.pyx"],
        define_macros=macros,
    )
]
4. Ensure your code is thread safe
Thread safety in Cython modules has always been the programmer’s responsibility, at least as far as pure C code goes. If you call into the CPython interpreter using Cython, even in the free-threaded build, the interpreter will handle reference counts for those objects (although it won’t do anything about data races). But any Cython functions that run concurrently must not manipulate the same data.
New type annotations for constants, pointers, and volatile values
As Cython has evolved, it has gradually transitioned away from its custom Python-but-not-quite syntax. Modern Cython uses a syntax that is essentially regular Python with Cython-added type hints, context managers, and other constructions.
That new syntax lacked some of the precision and expressiveness of “classic” Cython, however, so it was necessary to drop back to the older Cython syntax to express certain things. Cython 3.1 adds new type hints to help close that gap:
- cython.const[T] lets you declare something of a given type as a constant. For instance, x: cython.const[cython.int] = 32 would declare x as the constant integer 32. You can also abbreviate many common constant declarations; e.g., cython.const[cython.int] can be simplified to cython.const_int. Note that these declarations are only valid at the module level; they cannot be made within a function.
- cython.volatile[T] creates a declaration that is handled by the compiler in the same manner as something that uses the volatile C keyword. volatile is used to indicate that the compiler should not optimize the code in such a way that the value in question can ever be assumed to be any particular thing at any particular time. This is useful for, say, C modules that interface directly with hardware.
- cython.pointer[T] lets you define a pointer type. For instance, x: cython.pointer[cython.int] = cython.address(y) would make x into a pointer to y. Dereferencing a pointer in Cython is still done with the old-school C-style syntax, though: z = x[0] would dereference x into z (and thus set z to the value of y).
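Pulling those snippets together, here is a minimal sketch of what a module using the new annotations might look like in Cython’s pure-Python syntax (the variable and function names are hypothetical):

import cython

# Module-level constant; const declarations are only valid at the module level.
LIMIT: cython.const[cython.int] = 32   # could also be written as cython.const_int

def read_through_pointer(y: cython.int) -> cython.int:
    # Take the address of y and read it back through the pointer.
    x: cython.pointer[cython.int] = cython.address(y)
    return x[0]  # old-school C-style dereference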
A software developer gives thanks 27 Nov 2024, 10:00 am
Tomorrow, we in the US celebrate Thanksgiving. So today seemed like a good time for me to recognize the people and technologies in the software development world that I’m most thankful for.
I’m thankful for Alan Turing and computer programming
I don’t think many folks realize the impact that Alan Turing had on the world. He conceived of a general-purpose computer—a machine that could do any kind of calculation based on varying algorithms and inputs. He dreamed up computer programming. That the world treated him so badly after he helped win World War II by cracking the Nazi code is to our eternal shame. Thank you, Dr. Turing.
I’m thankful for Anders Hejlsberg and Turbo Pascal, Delphi, C#, and TypeScript
Though I cut my teeth on BASIC, it was Turbo Pascal 5.5 that led me to a career as a software developer. Anders Hejlsberg built Turbo Pascal himself, blazing a trail in software development tools when he joined forces with Philippe Kahn to sell a product (a Pascal compiler and IDE) that anyone could use and afford. Hejlsberg went on to create Delphi, C#, and TypeScript. Not a bad résumé. Thank you, Anders.
I’m thankful for Bill Gates, Steve Jobs, and Steve Wozniak and the personal computer
It’s hard to say who was most responsible for ushering in the personal computer age, but Bill Gates, Steve Jobs, and Steve Wozniak all played a major role. All three created products and companies that spawned countless businesses and products that dominate our world today. While I am not a big fan of Jobs, I have to admit that he led not one but two revolutions—the personal computer and the smartphone. Thank you, gentlemen.
I’m thankful for Tim Berners-Lee and HTTP/S
I don’t think many people realize how profoundly simple the World Wide Web is. One computer says “Hey, I have a text request” and another computer responds “Okay, here’s some text back.” Sure, that is an oversimplification, but not by much. HTTP is the foundation of 95% of what happens on the internet, and I’m grateful for everything that it has enabled from its humble beginnings. Thank you, Professor Berners-Lee.
I’m thankful for object-oriented programming
I’ll never forget the moment when things clicked in my brain and I finally understood the concept of object-oriented programming (OOP). It was fantastic. OOP has come under fire a bit in recent years, but it is the foundation for pretty much everything we do in code these days. It’s a beautiful model for thinking about and designing software. Thank you, Ole-Johan Dahl, Kristen Nygaard, Alan Kay, and many others.
I’m thankful for Visual Studio Code
An IDE that everyone uses for pretty much all languages and development? Freely available and actively updated? Thousands of cool extensions? Cross-platform? What’s not to like about Visual Studio Code? An added bonus—it is written in my favorite language, TypeScript. Thank you, Microsoft.
I’m thankful for GPS
It is hard for kids these days to conceive of having to call and get directions before heading out to some previously unknown destination. Does anyone else remember pulling over at a “filling station” to ask for directions? The Global Positioning System (GPS) enables all kinds of mapping and navigation applications (delivery, ride sharing, etc.) we never would have conceived without it. Thank you, American taxpayers.
I’m thankful for dependency injection
If I could go back 25 years and take with me only one development principle to tell developers about, it would be dependency injection. What a magnificent, beautiful thing it is. I loved it so much, I wrote a whole book on it. If you aren’t using dependency injection to decouple your code, you need to start today. Thank you, Martin Fowler and Uncle Bob Martin.
I’m thankful for REST
Getting one computer to do work for another computer was long the Holy Grail of programmers. We’ve tried all kinds of technologies — gRPC, CORBA, DCOM. It took a special kind of genius to notice that the four basic HTTP actions corresponded to a database’s CRUD (create, read, update, delete) system. Brilliant. Thank you, Roy Fielding.
I’m thankful for my Dad and his IBM 5100
Finally, on a highly personal note… My dad was a cardiologist and a very early member of the International Society for Computerized Electrocardiology. His hospital had one of the first “portable computers,” an IBM 5100. (“Luggable” would be a better way to describe it.) He brought it home on the weekends for me to use. I saved up and bought my very own data storage tape, and converted a very early video game — a text-based version of Star Trek — to run on it. Getting the game working on that little screen was one of the thrills of my life. Modding it to get the photon torpedoes to move across the screen was a milestone moment and set me on my way. Thank you, Dad.
Kotlin for Java developers: Concurrency with coroutines 27 Nov 2024, 10:00 am
The ability to perform multiple operations simultaneously is one of the most important aspects of any programming language. Orchestrating multiple paths of execution is inherently complex and there are various approaches to taming this complexity. Kotlin is a JVM language growing in popularity for its clean union of functional and object-oriented paradigms. In my previous article, I introduced Kotlin’s coroutines as a powerful feature for handling concurrency. Now we’ll take a closer look at writing concurrent programs using coroutines.
Understanding Kotlin’s coroutines
Like concurrency in general, it’s easy to understand the basics of Kotlin’s coroutines, but you have to keep a firm footing and move gradually: it can get complex fast. Kotlin’s coroutines are found in the kotlinx.coroutines package, which covers a wide terrain: everything from simple blocking subroutines to sophisticated reactive pipelines.
Coroutines are an abstraction over threads, similar to virtual threads in Java. Rather than controlling threads directly, with coroutines you usually work with a higher-level construct that the platform manages for you. The result is the potential for better performance—sometimes radically better—compared with using traditional threads. In essence, you declare the parts of your code that are “suspendable” and the engine determines how best to orchestrate them under the hood.
Syntactically, the idea behind coroutines is to provide a way to manage concurrency with synchronous-looking code blocks. This is done by using coroutine scopes. Spawning coroutines is only allowed within a scope. The scope defines the behavior of the routines within it, and all the coroutines launched within a scope return to that scope, regardless of how they conclude (e.g., even if they error out).
Declare a blocking scope
The most basic kind of scope is a blocking scope, which is obtained using the runBlocking function. This kind of scope tells the platform to block the current thread until all the coroutines within it are finished. It’s usually used at the top level of an application, in the main function, to ensure the program doesn’t complete before its subroutines are done with their work. (But note that it isn’t necessary to use runBlocking in Android because that platform uses an event loop.)
Here’s a common way to declare a blocking scope in your code:
import kotlinx.coroutines.*

fun main() = runBlocking {
    // do concurrent work
}
This syntax is an interesting part of Kotlin. It says to execute main immediately and provides the runBlocking function (from the kotlinx.coroutines library) as the implementation. It passes in the code block defined as the argument (known as a trailing lambda). Whatever is defined in the body of the curly braces will be executed in the blocking thread scope created by runBlocking.
Run a task with the launch function
Now let’s say we want to run a task inside our blocking main scope. A common approach is to use the launch function found in kotlinx.coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    println("start main")
    launch {
        println("start launch 1")
        delay(1000)
        println("end launch 1")
    }
    //println("between launches (main)")
    println("end main")
}
Running this produces the following output:
start main
end main
start launch 1
end launch 1
This output shows us that the main function executes to the end, then waits while the launch block is run concurrently. It waits for a second and then completes. This is an example of the delay function, which lets us wait for a specified number of milliseconds.
Now imagine that instead of a delay, we are executing a long-running network request, disk operation, or calculation. In this case, we could use a coroutines dispatcher. Dispatchers are used to fine-tune how concurrent jobs are handled. The default dispatcher works well to start.
Dispatchers and delays
Let’s expand our example to spawn two different jobs:
fun main() = runBlocking {
    println("start main")
    launch {
        println("start launch 1")
        delay(1000)
        println("end launch 1")
    }
    launch {
        println("start launch 2")
        delay(500)
        println("end launch 2")
    }
    println("end main")
}
Now we have two tasks launched, and running the above code produces output like so:
start main
end main
start launch 1
start launch 2
end launch 2
end launch 1
As you would expect, the faster job (with a delay of 500 milliseconds) completes before the slower one (1,000 milliseconds). Both complete after the main function is done.
Now let’s extract our functionality to a function. Say we want to have a function that accepts the milliseconds to wait (in real life this could be an API endpoint to retrieve) and we’ll call it twice with two different parameters (500 and 1000) to reproduce our previous example in a more flexible format. Using coroutines, the simplest possible way to do this is:
import kotlinx.coroutines.*

suspend fun launchTask(delayMillis: Long) {
    println("START task $delayMillis")
    delay(delayMillis)
    println("END launchTask $delayMillis")
}

fun main() = runBlocking {
    println("start main")
    launch {
        launchTask(1000)
    }
    launch {
        launchTask(500)
    }
    println("end main")
}
Running the above code produces the same output as the last example, but now we have a reusable launchTask function. Notice launchTask is prefixed with the suspend keyword. Without that, the engine will not understand that the function supports being “suspended”—meaning it is paused to allow other work to occur—and the compiler will reject the delay.
Notice also that we had to use two launch blocks. If we used one with both launchTask calls, the calls would happen sequentially.
Contexts and cancelations
Now let’s incrementally increase the sophistication of our concurrent logic. If we wanted to support the ability to cancel the two tasks, we could tie them together into a context, and use the cancel() method:
import kotlinx.coroutines.*

suspend fun launchTask(delayMillis: Long) {
    println("START task $delayMillis")
    delay(delayMillis)
    println("END launchTask $delayMillis")
}

fun main() = runBlocking {
    println("start main")
    val scope = CoroutineScope(Dispatchers.Default)
    scope.launch {
        launchTask(10000)
    }
    scope.launch {
        launchTask(500)
    }
    // Cancel all coroutines in the scope after 2 seconds
    delay(2000)
    scope.cancel()
    println("end main")
}
Here we explicitly create a CoroutineScope and use it to launch our two suspended function calls, again using the default dispatcher. With the handle to the scope, we can start our jobs and then cancel them with scope.cancel(). Notice that we have two tasks, one with a delay of 10,000 milliseconds. Because we cancel after 2,000 milliseconds, we get the following output:
start main
START task 500
START task 10000
END launchTask 500
end main
So, the 10,000-millisecond task was started but never completed. Instead, it was canceled along with its enclosing scope.
For another degree of sophistication, we can add a withTimeout block:
fun main() = runBlocking {
    println("start main")
    withTimeout(5000) {
        launch {
            launchTask(10000)
        }
        launch {
            launchTask(500)
        }
    }
    println("end main")
}
This block behaves similarly to the previous example in that it cuts short the 10,000-millisecond job. But in this case an exception will be thrown. Here’s how to handle the exception gracefully:
try {
    withTimeout(5000) {
        launch {
            launchTask(10000)
        }
        launch {
            launchTask(500)
        }
    }
} catch (e: TimeoutCancellationException) {
    println("Timeout occurred: ${e.message}")
}
We’ll get the following clean output:
start main
START task 10000
START task 500
END launchTask 500
Timeout occurred: Timed out waiting for 5000 ms
Now imagine we have our two jobs and one of them will do network calls, which is considered an IO-bound operation. We can pass in a specific dispatcher to use:
launch(Dispatchers.IO) {
    launchTask(10000)
}
We can also cancel jobs individually:
val scope = CoroutineScope(Dispatchers.Default)
scope.launch {
    launchTask(10000)
}
val job = scope.launch {
    launchTask(500)
}
job.cancel() // cancel the specific job
delay(2000)
scope.cancel()
We’d get the following output:
start main
START task 10000
end main
This works because the delay() function can be canceled. In more complex scenarios, you’d have to implement the cancellation support yourself. As the Kotlin documentation notes, cancellation is cooperative, so you have to ensure your code is cancelable. This can be done using the isActive property:
val job = launch(Dispatchers.Default) {
    var nextPrintTime = startTime
    var i = 0
    while (isActive) { // cancellable computation loop
        // print a message twice a second
        if (System.currentTimeMillis() >= nextPrintTime) {
            println("job: I'm sleeping ${i++} ...")
            nextPrintTime += 500L
        }
    }
}
When a cancel call is received, the isActive property resolves to false and the while loop allows the job to exit.
Communication with channels
Coroutines support channels as a clean way to pass data between running coroutines. Here’s an example:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>() // A communication channel for Ints

    val producer = launch {
        repeat(5) { i ->
            delay(1000)
            channel.send(i)
        }
    }

    val consumer = launch {
        repeat(5) {
            val message = channel.receive()
            println("Received: $message")
        }
    }
}
This allows the producer job to send messages over the channel object and the consumer to listen for them, printing them out as they are received. This is a very simple mechanism for sharing data between concurrent contexts.
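A common refinement, sketched below as my own illustration rather than taken from the example above, is for the producer to close the channel when it is finished, which lets the consumer simply iterate over the channel instead of counting receive() calls:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch { // producer
        repeat(5) { i ->
            delay(1000)
            channel.send(i)
        }
        channel.close() // signals that no more elements are coming
    }
    for (message in channel) { // the loop ends once the channel is closed and drained
        println("Received: $message")
    }
    println("Done receiving")
}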
Reactive programming with flows
Now let’s ratchet up the sophistication another notch using flows. These are functional event streams that give you a kind of reactive programming framework. Here’s a simple example:
import kotlinx.coroutines.*
import kotlin.random.*
import kotlinx.coroutines.flow.*

fun randomNumbers(count: Int): Flow<Int> = flow {
    for (i in 1..count) {
        emit(Random.nextInt()) // Emit a random integer
        delay(500) // Simulate some work
    }
}

fun main() = runBlocking {
    randomNumbers(5)
        .collect { value -> println("Received: $value") }
}
This code creates a function called randomNumbers that returns a Flow of Ints. When we call it, its body uses the emit() function to produce each value. A flow is like a stream in Java, and the .collect() call is the terminal operation. This lets us create composable, reactive pipelines out of flows. This programming model yields an immense amount of flexibility and power.
For example, if we wanted to add another step in the stream by doubling the numbers, we could create another functional operator:
fun doubledNumbers(numbers: Flow<Int>): Flow<Int> = numbers.map { it * 2 }
and add it to our pipeline:
fun main() = runBlocking {
    val randomFlow = randomNumbers(5)
    val doubledFlow = doubledNumbers(randomFlow)
    doubledFlow.collect { value -> println(value) }
}
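The intermediate flows don’t have to be named, either. As a small illustration of my own (not code from the article), the same kind of pipeline can be chained inline, adding further operators such as filter():
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    (1..5).asFlow()             // a simple source flow
        .map { it * 2 }         // the doubling step, applied inline
        .filter { it % 4 == 0 } // an extra, illustrative stage
        .collect { value -> println("Received: $value") }
}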
Conclusion
This has been a quick cruise through some of the most interesting parts of Kotlin’s concurrency model using coroutines. The foundation is easy to understand, and you can layer many more complex and higher-level capabilities on top of it. It’s notable that Kotlin includes only simple concurrency primitives in the language itself, then relies mainly on functions from kotlinx.coroutines and its subpackages to deliver the remaining functionality. This keeps things more flexible and amenable to elaboration by applications and library code.
Overall, Kotlin’s concurrency support is impressive and well thought out. It’s not hard to understand why the language is a popular alternative to Java.
Kotlin to lose scripting features 27 Nov 2024, 3:51 am
JetBrains plans to drop some scripting-related technologies from the Kotlin language, including REPL functionality, an obsolete API for Java scripting, the KotlinScriptMojo Maven plugin, and the kotlin-scripting-ide-services library for implementing code completion functionality.
In a November 19 blog post, JetBrains said it would drop these scripting technologies from Kotlin because its research showed that user needs were covered by other technologies already being supported. The scripting-related components and libraries to be removed from the Kotlin compiler and IntelliJ IDEA IDE include:
- Default REPL implementations in the Kotlin compiler and IntelliJ IDEA plugin will be sunsetted. The functionality is limited and improving it has not been a priority, JetBrains said. The company will continue to promote the Kotlin Notebook plugin and IDE Scratch files as solutions for interactive Kotlin development. Plans call for keeping some REPL-related functionality in the compiler and the Custom Scripting API. But with a final switch to the K2 compiler, a significant portion of this functionality will be dropped or changed.
- Java Specification Request 223, offering scripting for the Java platform. With the original specification having been withdrawn, JetBrains does not believe supporting a de facto obsolete API makes sense. The existing implementation will continue to function until the release of Kotlin 2.3 in the language version 1.9 compatibility mode.
- KotlinScriptMojo, a Maven plugin that supports script execution during a Maven build. JetBrains did not find evidence of enough usage to continue maintaining it, so plans call for dropping it in upcoming Kotlin releases.
- kotlin-scripting-ide-services, a library for implementing code completion functionality, mainly for REPL implementations. It is currently used in projects like Kotlin Interactive. Because it is heavily based on the infrastructure of the pre-K2 compiler it cannot easily be ported to the K2 version. Therefore, it will most likely stop working around the release of Kotlin 2.3 and will be removed from the code base. JetBrains may consider reimplementing similar functionality on top of K2 in the future.
Kotlin scripting remains an essential part of the Kotlin infrastructure, JetBrains said. The company will continue to support it in an experimental state.
Kotlin scripting enables executing Kotlin code as scripts without prior compilation or packaging into executables. While JetBrains will continue to offer generalized support for scripting in Kotlin, which includes compilation and evaluation of basic .kts files, as well as custom scripting, JetBrains is not prepared to recommend Kotlin scripting as a generally applicable scripting solution, or as a replacement for Bash or Python.
Scripting was introduced into Kotlin long ago as an experiment to investigate ways of using the language. Since then, development has been driven by the demand of external and internal adopters and experiments of the Kotlin team, JetBrains said.
Anthropic introduces the Model Context Protocol 26 Nov 2024, 5:57 pm
Anthropic today released a new open source protocol to let all AI systems, not just its own, connect with data sources via a standard interface.
Model Context Protocol (MCP), the company said in its announcement, lets developers build secure two-way connections between AI-powered tools and the data sources they require to do their jobs via a client-server architecture.
“As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale,” Anthropic said. “MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.”
Monday’s announcement was threefold: The company introduced the MCP spec and software development kits (SDKs), launched local MCP server support in its Claude Desktop apps, and provided an open source repository of MCP servers, including prebuilt servers for Slack, GitHub, SQL databases, local files, search engines, and other data sources.
Anthropic said that development tool vendors such as Replit and Codeium are adding support for MCP, and Zed, Sourcegraph, Block, and Apollo have already done so.
How MCP works
“The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers,” Anthropic said in its post.
There are three components to an MCP connection:
- Hosts – LLM applications such as Claude Desktop that initiate connections;
- Clients – Systems that maintain 1:1 connections with servers, inside the host application;
- Servers – Systems that provide context, tools, and prompts to clients.
Core components include a protocol layer to handle message framing, request/response linking, and high-level communication patterns and a transport layer to deal with communication between client and server.
At the moment there are two SDKs available: one for TypeScript, and one for Python. Anthropic also provides plenty of documentation on getting started and a GitHub repository of reference implementations and community contributed servers.
Currently, MCP only talks to servers running on a local computer, but Alex Albert, head of Claude relations at Anthropic, said in a post on X that work is in progress to allow for remote servers with enterprise-grade authentication.
“An MCP server shares more than just data as well. In addition to resources (files, docs, data), they can expose tools (API integrations, actions) and prompts (templated interactions),” he added. “Security is built into the protocol — servers control their own resources, there’s no need to share API keys with LLM providers, and there are clear system boundaries.”
Anthropic said that developers can start building and testing MCP connectors today, and existing Claude for Work customers can test MCP servers connecting to internal systems and data sets. And, the company promised, “We’ll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.”
Build generative AI pipelines without the infrastructure headache 26 Nov 2024, 10:00 am
While creating a basic ChatGPT prototype might take a weekend, developing production-ready generative AI systems that securely handle enterprise data presents significantly greater engineering challenges. Development teams typically invest weeks addressing critical infrastructure requirements: securing data pipelines across siloed systems (both unstructured and structured), configuring vector databases, making model selection decisions, and implementing comprehensive security controls—all while maintaining strict compliance standards.
Traditional approaches present a difficult choice. We either invest months in building custom infrastructure from scratch or we accept the limitations of vendor-specific ecosystems that restrict our choice of models, databases, and deployment options.
Gencore AI transforms this landscape. It enables the construction of enterprise-grade generative AI pipelines using any data system, vector database, AI model, and prompt endpoint. Through its flexible architecture and embedded security controls, you can deploy production-ready AI systems in days instead of months.
A highly flexible platform for building enterprise-grade AI systems
Gencore AI is a holistic solution that allows you to easily build safe, enterprise-grade generative AI systems, utilizing proprietary enterprise data securely across diverse data systems and applications. It accelerates generative AI adoption in enterprises by simplifying the construction of unstructured and structured data and AI pipelines from hundreds of data systems. The solution automatically learns data controls (such as entitlements) in underlying systems and applies them at the AI usage layer, protects AI systems against malicious use, and provides full provenance of the entire AI system for comprehensive monitoring and control.
Developers can use Gencore AI’s components either as a complete platform or as modular building blocks in existing projects. Gencore AI allows you to:
- Build safe enterprise AI copilots: Draw on a rich library of connectors and a unique knowledge graph to build enterprise AI copilots, knowledge systems, and apps that combine data from multiple systems. Enterprise controls, like entitlements in data systems, are automatically learned and applied at the AI usage layer. Gain full provenance of the entire AI system, including data and AI usage—down to the level of each file, every user, and all AI models and usage endpoints.
- Safely sync data to vector databases: Quickly and securely ingest and sync unstructured and structured data at scale from any system, including SaaS, IaaS, private clouds, and data lakes and data warehouses. Generate embeddings from data while retaining associated metadata and store them into a chosen vector database, making enterprise data ready for large language models (LLMs) to produce valuable insights.
- Prepare unstructured data for AI model training: Build and manage data preparation pipelines for model training and tuning, with built-in sanitization and quality control capabilities.
- Protect AI interactions: Configure LLM firewalls to protect user prompts, model responses, and data retrievals in AI systems.
Gencore AI pipeline architecture and components
With its flexible architecture, Gencore AI enables developers to easily configure complex AI pipelines and rapidly create, iterate, and deploy enterprise-grade AI systems. Let’s examine the core components and their capabilities:
- The Data Loader forms the foundation of the pipeline, connecting to a wide array of source systems. It implements granular filtering options based on file types, modification dates, and custom criteria. A key feature is its ability to extract and preserve metadata, such as access controls from source systems. The loader also supports incremental loading, efficiently handling large-scale data updates without the need for full reprocessing.
- Next, the Data Sanitizer performs in-memory data obfuscation based on classified data elements. It utilizes advanced pattern recognition techniques and natural language processing (NLP) models for sensitive content classification. The sanitizer offers customizable rules that can be fine-tuned to align with specific industry regulations. Importantly, it provides detailed logging and auditing of sanitization actions, which is crucial for maintaining compliance and enabling forensic analysis if needed.
- The Embeddings Generator captures the semantic meaning of your data into vector representations using selected embedding models. It supports multiple state-of-the-art APIs and hosted models as well as custom models, allowing organizations to choose the best fit for their data and use case. The generator implements efficient splitting strategies to handle long documents, ensuring that context is preserved while optimizing for vector database storage and retrieval.
- The Vector Database stores and indexes these embeddings for efficient retrieval. Gencore AI integrates with popular vector databases and implements optimized indexing strategies for fast similarity search. A standout feature is its support for hybrid search, combining vector similarity, syntactic similarity, and metadata filtering to provide more accurate and contextually relevant results.
- The Embeddings Retriever configures similarity search parameters for optimal result relevance. It offers advanced retrieval methods like hybrid search and LLM assisted re-ranking, providing tunable parameters for controlling the number of results and similarity thresholds. To improve performance for frequent queries, the retriever implements intelligent caching mechanisms.
- The LLM Selection component integrates with chosen LLM providers, supporting both cloud-based and on-premises deployments. It offers sophisticated model comparison tools to evaluate use cases and performance across different LLMs. The selected LLM is used within a powerful agentic workflow framework to maximize response accuracy.
- The Prompt, Retrieval, and Response Firewalls come with pre-configured policy templates for common security and compliance scenarios while also providing for custom rule and policy creation. These firewalls provide real-time policy enforcement with minimal latency impact, ensuring that AI interactions remain secure and compliant.
- Finally, the Assistant API exposes configured pipelines through RESTful APIs, enabling seamless integration with enterprise systems. Users can access these pipelines via multiple options: a web widget for easy integration into web pages or applications, a centralized user portal for dedicated access, or through popular messaging apps like Slack and Teams.
Accelerating enterprise generative AI development
As enterprises continue to explore the vast potential of generative AI, solutions like Gencore AI play a crucial role in bridging the gap between innovation and governance. By providing a comprehensive, safe, and flexible platform for building enterprise-grade AI systems, Gencore AI empowers organizations to move beyond proof-of-concept implementations to deploy scalable generative AI applications across their organization.
The future of enterprise AI development isn’t just about accessing advanced models—it’s about deploying these models efficiently and securely within complex organizational environments. Gencore AI provides developers with the tools and infrastructure needed to build generative AI applications that meet enterprise security, compliance, and scalability requirements, without sacrificing development speed or flexibility.
Mike Rinehart is VP of artificial intelligence and Bharat Patel is head of infrastructure engineering at Securiti.
—
Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
Are cloud units a good measure of cloud value? 26 Nov 2024, 10:00 am
If we’ve learned anything from the constant evolution of cloud computing and its integration into enterprise operations, it’s that generic metrics are rarely good at measuring success. Don’t get me wrong, cloud units are better than no metrics. Most enterprises use no metrics at all, even if they won’t admit it.
Metrics must be aligned with cloud value; otherwise, enterprises could get into trouble. Many of those running finops in Global 2000 enterprises are likely to agree with me. Thus, I felt compelled to write this.
What are cloud units?
The concept of cloud units originated within the cloud finops space. Cloud units are designed to tie cloud costs and resources to a standardized unit of business value, such as cost per user, transaction, or workload. The hope was to make cloud economics relatable and actionable for business leaders. On the surface, this sounds like a great idea. After all, in a world where CFOs are left scratching their heads over wildly fluctuating AWS bills, attaching real-world outputs to cloud expenditures offers clarity—or so we’re told.
In actuality, cloud units fall short in most real-world enterprise scenarios. Why? They reduce complex, dynamic cloud ecosystems into overly simplistic metrics that fail to account for the nuanced priorities, goals, and strategic outcomes unique to each business. What starts as a standard measure to align IT and business more effectively often ends up as a distraction—or worse, the wrong tool for the job.
A better method is to measure cloud value with metrics that adapt to your business’s specific needs and evolve with them. For example, an e-commerce platform might measure the cost of completing a single order. This allows leadership to better reconcile cloud infrastructure expenses with operational outcomes.
Where cloud units get it wrong
The one-size-fits-all approach of cloud units might benefit teams just starting their cloud finops journey, but it rarely holds up at scale or when business contexts grow more complex (like every cloud project I worked on in the past seven years).
Let’s call out what’s painfully obvious: Cloud environments are inherently chaotic. Enterprises run thousands of workloads across hundreds of services, often spanning multiple regions or cloud providers. This complexity makes it nearly impossible to reduce an entire cloud ecosystem to a single unit cost. An overly simplistic model often results in more confusion than clarity.
For instance, say your cloud unit represents cost per transaction. If this cost increases, what does that mean? Are inefficiencies driving the uptick or is it a sign of additional investments in scaling infrastructure during a seasonal surge in demand? Cloud units don’t offer that kind of granularity. Instead, they paint with broad strokes, making it easy to misinterpret valuable strategic spending as waste.
Every enterprise operates differently
Some companies focus on customer experience. Others pour resources into creating innovative products. Many have business models that don’t fit a standard mold. Yet cloud units are, by definition, one size fits all. They assume that all workloads should be mapped to the same primary output: the lowest cost per transaction, cost per gigabyte, or cost per instance hour.
For instance, a healthcare organization trying to calculate the cost of securely storing patient records won’t appreciate being lumped into the same framework as a video streaming service measuring costs per viewer. Likewise, an AI startup training machine learning models won’t find value in cost-per-user metrics since their business drivers are entirely different.
What cloud units miss is the individuality of enterprises and their goals. They force businesses to align with the metric instead of aligning the metric with the company. This makes cloud units ill-suited for enterprises that don’t fit neatly into their constraints.
Here’s where cloud units hit an even more significant snag: Value isn’t always financial. Yes, reducing per-unit costs is essential, but what about strategic gains? Enterprises expect outcomes such as greater agility, innovation, or customer satisfaction from cloud investments. Yet, these benefits are hard to quantify using a standardized metric like cloud units.
Consider an enterprise that invests heavily in real-time analytics. The immediate output might show relatively high costs but also long-term competitive advantages like better decision-making or improvements in customer retention. Take an organization that builds cloud-based disaster recovery systems: The value lies in ensuring resilience, not cutting costs per unit. Metrics must be equipped to reflect these intangible benefits so that companies don’t miss the bigger picture.
The case for bespoke cloud metrics
Now that we’ve dismantled the argument for cloud units, let’s talk about what works better: bespoke metrics, meaning metrics tailored to each problem domain. This is precisely what most finops pros attempt to avoid, so I suspect I’ll get a lot of pushback for my opinions.
Instead of forcing every workload or department into a generic mold, bespoke metrics reflect your organization’s needs. A media company might track cost per stream; a retail operation might track cost per fulfilled order. These metrics are far more relevant than trying to shoehorn cloud spending into a generalized cloud-unit framework.
Custom metrics can balance tangible costs with intangible benefits. For example: time-to-market improvements associated with cloud deployment, customer satisfaction tied to reduced application latency, or revenue growth tied directly to AI-driven innovation projects.
Where cloud units smooth details into one overarching value, bespoke metrics enable deeper inspection. You can track individual workloads, application teams, or geographic regions, identifying areas for optimization without losing sight of broader trends. This precision empowers data-driven decision-making.
A word of warning
I want to be clear: Using metrics to determine cloud value is essential. My concern is that the finops community is oversimplifying something that is complex, and that will lead to an incorrect understanding of cloud value. I want to avoid having companies with great cloud unit metrics watch value draining from their business and have no idea how it is happening. I’m seeing this today, and while it may be just an annoyance, it could become a real problem as finops leads them in the wrong direction.
As an architect, I’m focusing on returning the most value to the business through the proper configuration of technology. This is sometimes more difficult than many understand, and if I’m chasing windmills called cloud units, I can’t make this work. Finops people, time to begin rethinking this one.
USPTO petitioned to cancel Oracle’s JavaScript trademark 26 Nov 2024, 2:41 am
Deno Land, maker of the Deno runtime for JavaScript, has petitioned the United States Patent and Trademark Office (USPTO) to cancel Oracle’s ownership of the JavaScript trademark. The petition argues that Oracle has abandoned the trademark, because it has not sold any JavaScript goods or rendered any JavaScript services since acquiring the trademark from Sun Microsystems in 2009.
The petition was filed with the USPTO on November 22. It follows a September open letter by Node.js and Deno creator Ryan Dahl, JavaScript creator Brendan Eich, and several other leading members of the JavaScript community that asked Oracle to relinquish the trademark. The letter has drawn more than 14,000 signatures.
Oracle has owned the JavaScript trademark since its 2009 acquisition of Sun Microsystems. However, the company has played no active role in JavaScript’s development or ecosystem, Dahl said in a November 25 blog post. “Our aim is for Oracle to recognize this as a clear case of trademark abandonment and let the cancellation proceed uncontested,” Dahl said in an email. “However, should they choose to fight, we’re fully prepared to present overwhelming evidence that ‘JavaScript’ is a generic term, widely used by developers and organizations without any association with Oracle.”
Calling the trademark ownership an “outdated legal relic,” Dahl said Oracle’s ownership “has caused confusion and unnecessary barriers, including cease-and-desist letters sent to organizations for simply using the term ‘JavaScript’ in their names.” The USPTO filing marks a pivotal step toward freeing the JavaScript name from legal entanglements, Dahl said.
If the name is freed, conferences could use the name JavaScript without concerns of legal overreach. The language’s development specification name, ECMAScript, could be replaced by the name JavaScript, said Dahl.
The petition to the USPTO also accuses Oracle of committing fraud in 2019 in its renewal efforts for the trademark by submitting screen captures of the Node.js website. “Node.js is not affiliated with Oracle, and the use of screen captures of the ‘nodejs.org’ website as a specimen did not show any use of the mark by Oracle or on behalf of Oracle,” the petition states. Additionally, the petition stresses that the term JavaScript is generic and that Oracle does not control, and has never controlled, any aspect of the specification or how the term JavaScript can be used by others. Oracle, which has until January 4, 2025 to respond to the USPTO petition, could not be reached for comment.
”Today, millions of companies, universities, academics, and programmers, including Petitioner, use ‘JavaScript’ daily without any involvement with Oracle,” the petition states. “The phrase ‘JavaScript’ does not belong to one corporation. It belongs to the public.” Oracle’s ownership of the trademark creates marketplace confusion, hinders accurate marketing, and obstructs collaboration in the JavaScript community, the petition maintains.
PHP updates DOM API 25 Nov 2024, 10:25 pm
PHP 8.4.1, an update to the longstanding server-side web scripting language, has been released with features including an updated DOM API that makes it easier to work with HTML5 documents.
Accessible from php.net, PHP 8.4.1 was released November 21. The update also offers better performance, improved syntax, and improved type safety, according to php.net.
The new DOM API, available within the Dom namespace, includes standards-compliant support for parsing HTML5 documents and fixes longstanding compliance bugs in the behavior of DOM functionality. It also adds functions to make it more convenient to work with documents. Using the new DOM API, documents can be created using the Dom\HTMLDocument and Dom\XMLDocument classes.
Also in PHP 8.4.1, a new BcMath\Number object enables object-oriented usage and standard mathematical operators when working with arbitrary precision numbers. The objects are immutable and implement the Stringable interface, enabling them to be used in string contexts like echo $num.
With PHP 8.4.1, the scope to write to a property may now be controlled independently from the scope to read the property, reducing the need for boilerplate getter methods to expose a property’s value without allowing modification from outside the class. Also, property hooks now provide support for computed properties that can natively be understood by IDEs and static analysis tools, without needing to write docblock comments that might be out of sync.
Finally, new lazy objects also are featured in PHP 8.4.1. A lazy object is an object whose initialization is deferred until the state is observed or modified.
Predecessor PHP 8.3 arrived a year ago, bringing typing of class constants. It was followed by multiple point releases.
GitHub Copilot: Everything you need to know 25 Nov 2024, 10:00 am
In 2014, Microsoft Research released an experimental Bing Code Search add-on for Visual Studio and the web. It was a code snippet search tool for C# with a natural language interface, using an index of code from Stack Overflow, MSDN, Dotnetperls, and CSharp411, powered by Bing, running on Azure. The tool included a facility for changing the variable names from those in the snippet to those in your own code, but it didn’t work all that well. The accuracy was 70% to 80% for a single variable substitution, and fell rapidly as more variables needed to be renamed.
The experimental Microsoft Bing Code Search add-on from 2014 eventually evolved into GitHub Copilot. When I reviewed the preview version of GitHub Copilot in 2021, I found that it didn’t always generate good, correct, or even running code, but it was still somewhat useful. At the time, GitHub Copilot was powered by OpenAI Codex, which was based on the GPT-3.5 large language model (LLM), and Copilot considered only the current file for its context.
Two years later (in 2023), I reviewed GitHub Copilot X, a set of technical preview features that extended the original GitHub Copilot with chat and terminal interfaces, support for pull requests, and early adoption of OpenAI’s GPT-4. The GitHub Copilot X preview was greatly improved over the original GitHub Copilot. I found that it could sometimes generate a correct function and set of tests without much human help. It still made mistakes and hallucinated (generated false information), but not nearly as much as it once did.
Since then, GitHub Copilot has continued to get better. It has become more accurate and more reliable, and has added new capabilities including command-line support, code editing, code reviews, and the ability to generate descriptions of changes in pull requests. It has also begun to support additional models beyond OpenAI GPT models.
GitHub Copilot features
The current set of GitHub Copilot features includes generating code suggestions as you type in your IDE, chatting with you about code and related topics (such as algorithms and data structures), and helping you use the command line. If you have an Enterprise subscription, Copilot can generate a description of the changes in a pull request, and manage knowledge bases to use as a context for chats. There are also several features in preview for Copilot Workspace, which we’ll discuss later on.
You can currently use GitHub Copilot in your IDE (integrated development environment), if your IDE is supported (see the list in the next section). You can use Copilot in GitHub Mobile for Android, iOS, and iPadOS, as a chat interface. You can use Copilot on the command line, through the GitHub CLI. And you can use it on the GitHub website through a chat interface, currently marked “beta.” If you have a Business or Enterprise subscription, your administrators will have additional controls, logs, and reports.
Although GitHub Copilot is typically updated monthly, it doesn’t necessarily improve on each update. There have been months when its efficacy, as measured by benchmarks, goes down instead of up. That seems to happen when the model is trained on code in more programming languages or more frameworks, and when it is trained to eliminate some of the ways it goes off the rails. Sometimes the changes are noticeable in ordinary usage, and sometimes they are not. Occasionally there is a big improvement, for example when Copilot started including all open IDE files in its context instead of just the active file, and when OpenAI upgraded the underlying model to a new generation of GPT.
GitHub Copilot integrated with editors
GitHub Copilot is integrated with and officially supported in Azure Data Studio, JetBrains IDEs, Vim/Neovim, Visual Studio, and Visual Studio Code. There is unofficial support for Copilot in emacs, Eclipse, and Xcode. Official support for Apple’s Xcode was announced at GitHub Universe in October 2024.
GitHub Copilot can make inline code suggestions in several ways. Give it a good descriptive function name, and it will generate a working function at least some of the time—less often if it doesn’t have much context to draw on, more often if it has a lot of similar code to use from your open files or from its training corpus.
The same qualifications apply to generating a block of code from an inline comment. Being specific about what you want also helps a great deal. If you say something vague like “sort the list,” it might choose any known sort algorithm, including a bubble sort. If you say “sort the list in-memory using a QuickSort algorithm that drops to an insertion sort for short runs and has a randomized pivot point,” it will probably do exactly what you asked, which will be much more efficient than the bubble sort.
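To make the contrast concrete, here is a hand-written Kotlin sketch of roughly the kind of function the more specific prompt describes: an in-place quicksort with a randomized pivot that drops to insertion sort for short runs. It is my own illustration (the hybridSort name and the cutoff of 16 are arbitrary), not actual Copilot output:
import kotlin.random.Random

// Sorts the array in place: randomized-pivot quicksort with an insertion-sort cutoff.
fun hybridSort(a: IntArray, lo: Int = 0, hi: Int = a.lastIndex) {
    if (hi - lo < 16) {
        // Short run: fall back to insertion sort
        for (k in lo + 1..hi) {
            val key = a[k]
            var j = k - 1
            while (j >= lo && a[j] > key) {
                a[j + 1] = a[j]
                j--
            }
            a[j + 1] = key
        }
        return
    }
    // Randomized pivot: swap a random element to the end, then use a Lomuto partition
    val p = Random.nextInt(lo, hi + 1)
    a[p] = a[hi].also { a[hi] = a[p] }
    val pivot = a[hi]
    var i = lo - 1
    for (j in lo until hi) {
        if (a[j] <= pivot) {
            i++
            a[i] = a[j].also { a[j] = a[i] }
        }
    }
    a[i + 1] = a[hi].also { a[hi] = a[i + 1] }
    hybridSort(a, lo, i)     // left partition
    hybridSort(a, i + 2, hi) // right partition
}
Calling hybridSort(intArrayOf(5, 3, 8, 1, 9)) sorts the array in place, which is the behavior the specific prompt asks for.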
Test generation is generally easier to automate than initial code generation. GitHub Copilot will often generate a reasonably good suite of unit tests on the first or second try from a vague comment that includes the word “tests,” especially if you have an existing test suite open elsewhere in the editor. It will usually take your hints about additional unit tests, as well, although you might notice a lot of repetitive code that really should be refactored. Refactoring often works better in Copilot Chat. Copilot can also generate integration tests, but you may have to give it hints about the scope, mocks, specific functions to test, and the verification you need.
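Continuing the illustration, a generated suite for the hybridSort() sketch above might look something like the following hand-written example, which uses kotlin.test. Again, this is hypothetical, not actual Copilot output:
import kotlin.test.Test
import kotlin.test.assertContentEquals

class HybridSortTests {
    @Test
    fun sortsAnUnsortedArray() {
        val a = intArrayOf(5, 3, 8, 1, 9, 2)
        hybridSort(a)
        assertContentEquals(intArrayOf(1, 2, 3, 5, 8, 9), a)
    }

    @Test
    fun handlesAnEmptyArray() {
        val a = intArrayOf()
        hybridSort(a)
        assertContentEquals(intArrayOf(), a)
    }

    @Test
    fun preservesDuplicatesAndSortedInput() {
        val a = intArrayOf(1, 1, 2, 2, 3)
        hybridSort(a)
        assertContentEquals(intArrayOf(1, 1, 2, 2, 3), a)
    }
}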
The downside of using GitHub Copilot to generate code or tests is that it isn’t consistent or trustworthy. You have to take the code generated by Copilot (or any LLM) with a huge grain of salt, and plan for a lot of code review and testing, just as you would for a pull request from an unknown programmer, or a check-in from a junior programmer with a drinking problem.
Note that the newest OpenAI models, the o1 family, are much better in this regard than the old GPT family, but not hallucination-free. (See “ChatGPT o1-preview excels at code generation.”)
GitHub Copilot Chat
GitHub Copilot Chat in your editor can give code suggestions, explain code, generate unit tests, and suggest code fixes. It can also iterate with you, almost like a real pair programmer.
In the latest version of GitHub Copilot Chat, you can specify chat “participants” with keywords prefixed with @, such as @workspace, and variables prefixed with #, such as #file. You can select the model you want to use in Copilot Chat in the “Ask Copilot” box when you submit a query. Currently the options are GPT 4o, o1-mini (preview), and o1-preview (preview), all of which are from OpenAI. The documentation also mentions Claude 3.5 Sonnet, from Anthropic, which rolled out to me as another preview during the course of writing this article. At the GitHub Universe 2024 announcement that discussed new models under the slogan of “developer choice,” the speaker mentioned that Google’s Gemini 1.5 Pro would be added in a few weeks.
As of Visual Studio Code Version 1.95, the GitHub Copilot Chat sidebar has moved from the left, where it had to share space with many other commonly-used functions, to a secondary sidebar at the right. You can also invoke Copilot Chat inline, in a quick chat drop-down menu, and in smart actions from the context menu.
Asking Copilot Chat for /help (a slash command) currently gives this response:
GitHub Copilot
You can ask me general programming questions, or chat with the following participants which have specialized expertise and can perform actions:
- @workspace – Ask about your workspace
- /explain – Explain how the code in your active editor works
- /tests – Generate unit tests for the selected code
- /fix – Propose a fix for the problems in the selected code
- /new – Scaffold code for a new file or project in a workspace
- /newNotebook – Create a new Jupyter Notebook
- /fixTestFailure – Propose a fix for the failing test
- /setupTests – Set up tests in your project (Experimental)
- @vscode – Ask questions about VS Code
- /search – Generate query parameters for workspace search
- /runCommand – Search for and execute a command in VS Code
- /startDebugging – Generate launch config and start debugging in VS Code (Experimental)
- @terminal – Ask how to do something in the terminal
- /explain – Explain something in the terminal
- @github – Get answers grounded in web search, code search, and your enterprise’s knowledge bases
You can also help me understand your question by using the following variables to give me extra context:
- #selection – The current selection in the active editor
- #codebase – Searches through the codebase and pulls out relevant information for the query.
- #editor – The visible source code in the active editor
- #terminalLastCommand – The active terminal’s last run command
- #terminalSelection – The active terminal’s selection
- #file – Choose a file in the workspace
To have a great conversation, ask me questions as if I was a real programmer:
- Show me the code you want to talk about by having the files open and selecting the most important lines.
- Make refinements by asking me follow-up questions, adding clarifications, providing errors, etc.
- Review my suggested code and tell me about issues or improvements, so I can iterate on it.
You can also ask me questions about your editor selection by starting an inline chat session (⌘K I).
Learn more about GitHub Copilot in Visual Studio Code.
GitHub Copilot Edits (preview)
I can currently see GitHub Copilot Edits in my Visual Studio Code installations (both standard and Insiders), but not in the documentation. It was demonstrated at the GitHub Universe 2024 keynote.
Basically, GitHub Copilot Edits asks you to define a set of files with which you wish to work, and then define the changes you want to make. Copilot Edits runs in the same right-hand sidebar as Copilot Chat. The major difference between the two is that Copilot Edits makes multi-file changes, but Copilot Chat doesn’t, even though Copilot Chat can use multiple files for context.
GitHub Copilot Code Reviews (preview)
GitHub Copilot Code Reviews can review your code in two ways, and provide feedback. One way is to review your highlighted code selection (Visual Studio Code only, open public preview, any programming language), and the other is to more deeply review all your changes (VS Code and GitHub website, public preview with waitlist). Deep reviews can use custom coding guidelines. They are also currently restricted to C#, Go, Java, JavaScript, Markdown, Python, Ruby, and TypeScript.
GitHub Copilot in the CLI
You can use GitHub Copilot with the GitHub CLI to help you with shell commands, as long as the gh
command is installed and up to date. Asking the command for help returns:
% gh copilot --help
Your AI command line copilot.
Usage:
copilot [command]
Examples:
$ gh copilot suggest "Install git"
$ gh copilot explain "traceroute github.com"
Available Commands:
alias Generate shell-specific aliases for convenience
config Configure options
explain Explain a command
suggest Suggest a command
Flags:
-h, --help help for copilot
--hostname string The GitHub host to use for authentication
-v, --version version for copilot
Use "copilot [command] --help" for more information about a command.
GitHub Copilot programming language support
GitHub Copilot provides suggestions for many programming languages and frameworks, but the best support is for Python, JavaScript, TypeScript, Ruby, Go, C#, and C++, since those languages were the most prevalent in the training corpus. GitHub Copilot can also assist in query generation for databases, and in generating suggestions for APIs and frameworks and infrastructure as code.
GitHub Copilot Extensions (public preview)
There are currently 27 GitHub Copilot Extensions that you can add to your account and call from GitHub Copilot Chat by using their @-prefixed name. Examples include @models and @perplexityai. While I have been able to install and authenticate these two and some others, I haven’t found them terribly useful so far.
You can write new extensions if you wish. GitHub Copilot Extensions are essentially GitHub Apps with additional read access to GitHub Copilot Chat, integration with the GitHub Copilot API, and optional integration into other LLMs. To publish an extension, it must be owned by an organization account with Verified Creator status. To publish a paid plan for your app on the GitHub Marketplace, your app must be owned by an organization that is a verified publisher.
GitHub Copilot Workspace (private technical preview)
GitHub Copilot Workspace is an “AI-native” development environment that allows you to collaborate with GitHub Copilot on repo-wide coding tasks, using natural language and integrated cloud compute. Copilot Workspace is “task-centric,” meaning that you can start with a GitHub issue, an ad hoc task from the Copilot Workspace dashboard, or an ad hoc task from a repository page. In the first case, the GitHub issue is already defined, so you just use the “Open in Workspace” button to get Copilot Workspace to figure out how to solve it. In the other two cases, you’ll have to define a draft issue and then pass it to Copilot Workspace to solve.
How is GitHub Copilot trained?
GitHub Copilot originally used the OpenAI Codex model, which was essentially GPT-3 additionally trained on lots of open-source code, especially Python code, in GitHub repositories. Later iterations used GPT-4, then GPT 4o, and now a selection of models trained in different ways.
Concerns about GitHub Copilot
The earliest public concerns about GitHub Copilot are summarized in a 2022 class-action lawsuit alleging that GitHub Copilot represents a breach of contract with GitHub’s users and a breach of privacy that shares personally identifiable information. The suit was dismissed by a US District Court judge in San Francisco in July 2024, though the judge declined to dismiss the plaintiffs’ claim for breach of contract of open-source license violations against all defendants.
Apple released a study in October 2024 that concluded that LLMs can’t really perform genuine logical reasoning. Because programming requires logical reasoning, that implies that LLMs can’t code. That fits in with the description of LLMs as “stochastic parrots.” There have also been concerns that the use of OpenAI Codex and similar models may lead students to over-reliance and plagiarism. Others summarize the issue by saying that using models to program makes programmers stupid.
GitHub Copilot competitors
Currently, there are at least a dozen competitors to GitHub Copilot. They include at least Tabnine, Codeium, CodeGeeX, Replit Ghostwriter, Devin AI, JetBrains AI, Sourcegraph Cody, and Amazon Q Developer, if you want to limit yourself to models embedded in code editors. If you broaden the definition of competition to include chat models that can generate code, then you have to consider multiple models from OpenAI, Anthropic, Google, Mistral, Meta, and several other companies. You can also consider Visual Studio Code alternatives, such as Zed and Cursor (see “Two good Visual Studio Code alternatives”), as well as “next-generation” AI coding products, such as Solver and Zencoder.
Prior to the GitHub Universe 2024 conference, I wondered whether GitHub Copilot was being eclipsed by more capable coding plug-ins, such as Tabnine and Amazon Q Developer, or by the likes of Zed, Cursor, Solver, Zencoder, or other up-and-comers. Now I wonder whether any of those other products will be able to leapfrog VS Code and GitHub Copilot. I don’t count the competitors out, though. Stay tuned.
3 data engineering trends riding Kafka, Flink, and Iceberg 25 Nov 2024, 10:00 am
Apache Kafka, Apache Flink, and Apache Iceberg are among the most popular technologies in the data ecosystem. While Kafka enables you to move data around in real time, Flink allows you to process data according to your needs and Iceberg helps you access stored data in a structured and navigable manner so that it’s ripe for querying. All three are influencing how we build data systems.
New features are continuously being added to each of the three tools by their open source software communities, often in collaboration with one another. This means that best practices are constantly evolving. And that means data professionals must stay on top of wider industry trends like the recent increased focus on data governance.
Here are three trends I’ve been seeing lately around the Kafka, Flink, and Iceberg communities. Each presents a new way for engineers to manage data and meet application needs.
Re-envisioning microservices as Flink streaming applications
A common way to process data is to pull it out of Kafka using a microservice, process it using the same or potentially a different microservice, and then dump it back into Kafka or another queue. However, you can use Flink paired with Kafka to do all of the above, yielding a more reliable solution with lower latency, built-in fault tolerance, and event guarantees.
Flink can be set to listen for data coming in, using a continuous push process rather than a discrete pull. In addition, using Flink instead of a microservice lets you leverage all of Flink’s built-in accuracy guarantees, such as exactly-once semantics. Flink has a two-phase commit protocol that enables developers to have exactly-once event processing guarantees end-to-end, which means that events entered into Kafka, for example, will be processed exactly once with Kafka and Flink. Note that the type of microservice that Flink best replaces is one related to data processing, such as updating the state of operational analytics.
Use Flink to quickly apply AI models to your data with SQL
Using Kafka and Flink together allows you to move and process data in real time and create high-quality, reusable data streams. These capabilities are essential for real-time, compound AI applications, which need reliable and readily available data for real-time decision-making. Think of the retrieval augmented generation (RAG) pattern, which supplements whatever model we use with right-in-time, high-quality context to improve the responses and mitigate hallucinations.
Using Flink SQL, you can write simple SQL statements to call a model of your choice (e.g., OpenAI, Azure OpenAI, or Amazon Bedrock). Practically speaking, you can configure any AI model with a REST API for Flink AI to use when processing your data stream. This enables you to use a custom, in-house AI model.
There are infinite use cases for AI, but it is commonly used for classification, clustering, and regression. You could use it, for example, for sentiment analysis of text, or scoring of sales leads.
Beyond its AI capabilities, Flink plays exceptionally well with everyone’s favorite streaming technology, Kafka—you may have heard of it. And I think that’s part of the reason why Flink is going to remain popular and remain the community’s choice for stream processing.
Leveraging community-built Apache Iceberg tools
Community contributions to Iceberg have been strong in recent months as more developers and organizations use this open data format to manage large analytical data sets—especially those stored in data lakes and data warehouses. For example, migration tools have been built to easily move Iceberg catalogs from one cloud provider to another. There are also tools to analyze the health of a given Iceberg instance.
Another contribution by the community is the Puffin format, a blob that lets you add statistics and additional metadata to data managed by an Iceberg table. Even the functionality that lets you send your Iceberg data back into Flink is a result of contributions by Flink and Iceberg community committers.
As more contributors and even vendors join the broader Iceberg community, data value—wherever it lives in your data architecture—will be more accessible than ever. When combined with Kafka/Flink applications and a shift-left approach to governance, Iceberg tables can help dramatically accelerate and scale how you build real-time analytics use cases.
Stay up-to-date with the latest in data streaming engineering
Staying current with the state of the art in Kafka, Flink and Iceberg means keeping an eye on the continuous streams of KIPs, FLIPs, and Iceberg PRs emanating from their respective communities. The dominance of the three technologies in their key functions, as well as the technological synergies among them, means keeping pace with trends and skills in this growing space will be well worthwhile.
Adi Polak is director of advocacy and developer experience engineering at Confluent.
—
New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.
Speed is the killer app 25 Nov 2024, 10:00 am
Why do enterprises so often choose to run applications in the cloud even when it may not be the cheapest option? Why do they turn to open source even if it’s not the most feature-complete choice? And why is generative AI so frothy hot even though, as my InfoWorld colleague David Linthicum argues, “Companies are good at spending money [on AI] but bad at building and deploying AI.”
The answer is speed. As an IT executive friend at a large financial services firm told me recently, it’s expensive to move slowly when market opportunities require fast execution. For him, it’s imperative to build with the cloud’s elastic infrastructure to eliminate the possibility that his company may fail to capitalize on windows of opportunity. This same general principle is driving a number of technology trends, including cloud computing, open source, and generative AI. Each is winning precisely because they help enterprises move faster.
The cost of failure
RedMonk’s Steve O’Grady talks a lot about the power of convenience in driving developer decisions. “For developers convenience trumps most other technology characteristics,” he has written, listing a number of open technologies that owe their early and ongoing success to making life easier for developers: Linux, MongoDB (my employer), Git, MySQL, and more. Each of these took off, to borrow from O’Grady, because of convenience, but also because of speed. Open source is primarily popular because it lowers barriers to using great software, so developers can focus on getting stuff done.
As I’ve noted, “Cloud perfects many of the reasons developers first embraced open source.” With cloud, developers not only get easy, speedy access to code but also to the hardware necessary for running it. As it turns out, this also plays well with enterprise line-of-business owners who are more focused on meeting evolving customer needs than counting pennies. It’s not that budgets don’t matter, it’s just that it doesn’t matter what a service costs to deliver if you’re too late.
That’s how my financial service executive friend sees it. For his company, it’s not an option to deploy their applications to private data centers because they can’t afford delays inherent in scaling private cloud resources. Cost is important but secondary.
Building for success
Nor is his experience atypical. Years ago, following a Gartner analysis of private cloud investments, I noted, “That company-changing app that will make your career? It’s running on AWS. Ditto all the other projects that promise to transform your business and, perhaps, your industry.” The more staid, non-transformational applications have tended to stick with private cloud. One of the primary voices for cloud repatriation, the practice of pulling back from public cloud to return applications to private data centers, is 37signals cofounder David Heinemeier Hansson. He’s spent the past few years trying to convince companies that “renting computers is (mostly) a bad deal for medium-sized companies like ours with stable growth.” That sounds reasonable until you ask, how many companies can realistically plan for predictable growth without any real upside (or downside)? Not many. So it makes sense to optimize for speed with cloud, open source, and now AI.
Just ask developers. AI continues to see more spending than success, to Linthicum’s point, but developers are actively, happily using it to help them build and test code faster. Although not a scientific sampling, Gergely Orosz’s informal survey results are representative of other polls I’ve seen from industry research firms. Most developers increasingly consult genAI tools such as GitHub Copilot to get answers to programming questions. The reason? Speed, convenience, or whatever you choose to call it.
Nor is this just a developer thing. As Eric Colson writes of enterprise misuse of data scientists, “The untapped potential of data scientists lies not in their ability to execute requirements or requests but in their ideas for transforming a business.” Too many enterprises constrain their data scientists to tactical analyses or operational dashboards, he says, which is non-differentiating and not likely to require or drive greater operational speed.
This brings us back to the original point. If an enterprise wants to play it safe, they’ll eschew open source and AI and stick to the comfortable routines of their private cloud infrastructure. That isn’t a recipe for winning. Enterprises that want to win will embrace those things that enable their developers to move more quickly.