OpenTofu becomes the real deal 24 Mar 2025, 10:00 am

In open source, forks often struggle to break free from the shadow of their progenitors. But OpenTofu, the community-driven Terraform fork born from HashiCorp’s licensing upheaval, is writing a different story. Since January 2024, OpenTofu has transformed from a hopeful manifesto into a thriving project under the Linux Foundation, backed by an enthusiastic community and big-name sponsors. A little over a year in, OpenTofu shows surprising strength—not just in community enthusiasm but also in concrete measures of success such as code contributions, feature delivery, and corporate backing.

With IBM’s acquisition of HashiCorp finally complete, this could be OpenTofu’s moment.

GitHub metrics tell the story

None of this was obvious from the start. At least, not to me. You may remember that I pilloried OpenTofu early on for a lack of big-name cloud support, then made the questionable (read: incorrect) suggestion that OpenTofu might have been following too fast with its fast-follow approach. I was wrong in both instances. OpenTofu still has a ways to go to prove its success, but the signs are positive.

For example, consider GitHub stars. Yes, Terraform still leads comfortably (around 45,000 to OpenTofu’s 23,000), but that gap hides the real action: community engagement. Since its stable launch in January 2024, OpenTofu has nearly tripled its contributor base to more than 160. Each release draws a vibrant crowd. Version 1.9 saw 49 contributors submit over 200 pull requests (PRs). Terraform, by contrast, entered 2024 with a massive historical contributor base (more than 1,800 total) but far less new blood. After HashiCorp’s shift to the Business Source License (BSL), community contributions to Terraform plummeted: only ~9% of pull requests came from the community in the month of the license change, down from 21% prior. A year later, Terraform’s GitHub activity remains robust in sheer volume (over 34,000 commits total versus OpenTofu’s ~32,500), but those commits come largely from HashiCorp’s own engineers rather than from the kind of committed, buzzing community that is building OpenTofu.

OpenTofu’s issue tracker exemplifies open source at its collaborative best. In one four-month period in late 2024, users opened over 150 issues and submitted more than 200 pull requests. Nor have issues lingered—the community has quickly rallied with solutions. Terraform, meanwhile, still sees plenty of issues opened, but the dialogue is muted, largely managed internally by HashiCorp staff (and soon, those same staff inside IBM). The vibrant collaboration that once marked Terraform now thrives within OpenTofu.

Vibrant community engagement

Stars on GitHub indicate popularity, but real community strength shows up in day-to-day interactions. OpenTofu’s Slack workspace and GitHub Discussions have become hubs of enthusiastic dialogue and rapid feedback. It’s reminiscent of classic open source projects: inclusive, lively, and genuinely responsive. Terraform’s forums, in contrast, feel quiet since the fork.

The shift in developer sentiment is unmistakable. Discussions about new OpenTofu features (such as built-in state encryption or the long-awaited -exclude flag) regularly pop up on Reddit and similar platforms where excitement for OpenTofu’s innovations often outweighs nostalgia for Terraform. This may be one reason we’ve seen projects like Alpine Linux ditching Terraform for OpenTofu: It’s partly a licensing issue and partly about community enthusiasm for what OpenTofu is becoming.

Backing from multiple vendors

What about corporate vendors? It’s still the case that the cloud vendors haven’t (to my knowledge) contributed code to OpenTofu, though each of the big three has quietly ensured compatibility with OpenTofu. More overt cloud support may follow, but for now, companies like Harness, Spacelift, env0, Scalr, and Gruntwork have pledged significant resources—18 full-time engineers collectively for five years. Initially, actual contribution lagged the pledges from the 163 companies and nearly 800 individuals who put their names behind the initial manifesto, which caused some skepticism. Yet by late 2024, vendor-backed contributors had ramped up significantly, making good on their commitments, and companies like Cloudflare and Buildkite chipped in with infrastructure support, further enriching OpenTofu’s ecosystem.

HashiCorp’s Terraform remains strong, of course, especially among enterprise users. But the broader open source world has decisively aligned behind OpenTofu, attracted by its multivendor governance model and genuinely open ethos. For many, this makes OpenTofu a compelling upgrade over Terraform, not just a “good enough” replacement due to licensing.

Accelerated innovation

OpenTofu didn’t just replicate Terraform—it leapfrogged it in areas the community prioritized. It swiftly introduced game-changing features Terraform users had requested for years. Native end-to-end state file encryption arrived early, a devsecops dream unmet by Terraform. Provider iteration (for_each), an -exclude flag for selective applies, and dynamic module sourcing addressed pain points Terraform had left unresolved.

HashiCorp’s own updates haven’t stalled, but their innovation often seems incremental compared to OpenTofu’s aggressive feature rollout. Terraform’s enhancements, such as provider-defined functions and stricter variable validation, are welcome but safe bets. OpenTofu is taking bigger swings, making small, strategic breaks with compatibility (like introducing the .tofu file extension) to push innovation further.

Additionally, OpenTofu’s new open-source registry (with Git-backed decentralization) signals its intent to build a robust, open ecosystem distinct from HashiCorp’s proprietary registry approach.

Is OpenTofu truly successful?

So, has OpenTofu succeeded as a fork? It depends on how you measure success.

In terms of building a thriving community, absolutely. OpenTofu has rekindled the community-driven spirit Terraform lost after licensing changes. It has active, engaged contributors not beholden to a single vendor. Featurewise, OpenTofu is not just on par—it’s begun pushing past Terraform in meaningful ways.

Real-world adoption, however, is harder to quantify. Terraform still commands massive enterprise mindshare. But OpenTofu’s registry traffic (millions of daily requests) and substantial CLI downloads indicate real traction. Tool vendors like Scalr report sharply increased OpenTofu usage (more than 300% year-over-year growth in registry usage), signaling a meaningful shift beyond mere curiosity.

A complicated but promising path forward

OpenTofu isn’t without challenges. It must sustain momentum, prove itself at enterprise scale, and keep the community growing to avoid dependency on key individuals. But these hurdles reflect genuine progress. OpenTofu has moved well beyond the typical fork fate of stagnation or irrelevance.

Historically, forks struggle when ideology outweighs pragmatism, or licensing debates overshadow real benefits. OpenTofu succeeded precisely because it didn’t fixate on the open source advantage it had over Terraform; instead, it focused on delivering real, community-requested features that users genuinely value. As Redis CEO Rowan Trollope recently argued, “If you’re the average developer, what you really care about is capability: Does this thing offer something unique and differentiated … that I need in my application?” OpenTofu hasn’t rested on its open source laurels, preferring instead to focus on delivering a great product.

None of this implies Terraform is “dead” or even declining in absolute terms. HashiCorp still has a huge customer base and is likely monetizing Terraform more than ever via Terraform Cloud. But in the open source arena, Terraform has undeniably lost its crown to OpenTofu. The community energy around Terraform now largely flows into OpenTofu, and that is the ultimate sign of a successful fork. HashiCorp bet that their ecosystem had no viable alternative; the community answered by creating one. It’s a remarkable feat, one that just might turn into hefty enterprise adoption.


Learning AI governance lessons from SaaS and Web2 24 Mar 2025, 10:00 am

The experimental phase of generative AI is over. Enterprises now face mounting pressure — from boardrooms to the front lines — to move AI into production to streamline operations, enhance customer experiences, and drive innovation. Yet, as AI deployments grow, so do the reputational, legal, and financial risks.

The path forward is clear. After all, good governance is good business. Gartner expects enterprises that invest in AI governance and security tools to achieve 35% more revenue growth than those that don’t. But many leaders are unsure where to start. AI governance is a complex, evolving field, and navigating it requires a thoughtful approach. Fortunately, lessons from the governance journeys of SaaS and Web2 offer a proven roadmap.

AI governance challenges

AI governance isn’t just a technical hurdle — it’s a multifaceted challenge. Gaining visibility into how AI systems interact with data remains difficult, because AI systems often operate as black boxes, defying traditional auditing methods. Solutions that have worked in the past, such as observability and periodic reviews of development practices, neither mitigate the risks of unpredictable behavior nor prove acceptable use of data when applied to large language models (LLMs).

Complicating matters further is AI’s rapid evolution. Autonomous systems are advancing quickly, with agents emerging that can communicate with each other, execute complex tasks, and interact directly with stakeholders. While these autonomous systems introduce exciting new use cases, they also create substantial challenges. For example, an AI agent automating customer refunds might interact with financial systems, log reason codes for trend analysis, monitor transactions for anomalies, and ensure compliance with company and regulatory policies — all while navigating potential risks like fraud or misuse.

The regulatory landscape also remains in flux, particularly in the U.S. The Trump administration’s repeal of Biden’s AI Executive Order has added complexity and will likely lead to an increase in state-by-state legislation over the coming years, making it difficult for organizations operating across state lines to predict the specific near-term and long-term guidelines they need to meet. Developments like the Bipartisan House Task Force’s report and recommendations on AI governance have highlighted the lack of clarity in regulatory guidelines. This uncertainty leaves organizations struggling to prepare for a patchwork of state-specific laws while managing global compliance demands like the EU AI Act or ISO 42001.

In addition, business leaders face numerous governance frameworks and approaches, each optimized to address different challenges. This abundance forces them into a continuous cycle of evaluation, adoption, and adjustment. Many organizations resort to reactive, resource-intensive processes, creating inefficiencies and stalling AI progress.

It’s time to break the cycle. AI governance must evolve from reactive to proactive to drive responsible innovation.

From reactive to proactive governance

This ad hoc approach to AI governance mirrors the initial paths of SaaS and Web2. Early SaaS and Web2 companies often relied on reactive strategies to address governance issues as they emerged, adopting a “wait and see” approach. SaaS companies focused on basics like release sign-offs, access controls, and encryption, while Web2 platforms struggled with user privacy, content moderation, and data misuse.

This reactive approach was costly and inefficient. As SaaS applications scaled, manual processes for user access management and threat detection strained resources. Similarly, Web2 platforms faced backlash over privacy violations and inconsistent enforcement of policies, which eroded trust and hampered innovation.

The turning point for both industries came with the adoption of continuous, automated governance. SaaS providers implemented continuous integration and continuous delivery (CI/CD) pipelines to automate the testing of software and deployed tools for real-time monitoring, reducing operational burdens. Web2 platforms implemented machine learning to flag inappropriate content and detect fraud at scale. The results were clear: improved security, faster innovation, and lower costs. 

AI is now at a similar crossroads. Manual, reactive governance strategies are proving inadequate as autonomous systems multiply and data sets grow. Decision-makers frustrated with these inefficiencies can look at the shift toward automation in SaaS and Web2 as a blueprint for transforming AI governance within their organizations. 

Continuous and automated AI governance

A continuous, automated approach is the key to effective AI governance. By embedding tools that enable these features into their operations, companies can proactively address reputational, financial, and legal risks while adapting to evolving compliance demands.

For example, continuous, automated AI governance systems can track data to ensure compliance with the EU AI Act, ISO 42001, or state-specific legislation such as the Colorado AI Act. These systems can also reduce the need for manual oversight, allowing technical teams to focus on innovation rather than troubleshooting. 
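To make that concrete, the core of such a control is a machine-checkable policy evaluated on every new batch of data rather than at quarterly review time. Here is a minimal Python sketch; the policy rules and field names are hypothetical illustrations, not drawn from any particular product or regulation:

    # Minimal sketch of a continuous, automated governance check.
    # Policy rules and field names are hypothetical illustrations.
    from datetime import datetime, timezone

    POLICY = {
        "required_fields": {"consent", "purpose"},        # every record must carry these
        "banned_purposes": {"biometric_categorization"},  # disallowed uses
    }

    def check_record(record: dict) -> list[str]:
        """Return the policy violations found in one data record."""
        violations = []
        missing = POLICY["required_fields"] - record.keys()
        if missing:
            violations.append(f"missing fields: {sorted(missing)}")
        if record.get("purpose") in POLICY["banned_purposes"]:
            violations.append(f"banned purpose: {record['purpose']}")
        return violations

    def audit(records: list[dict]) -> None:
        """Run on every new batch, not just at periodic review time."""
        for record in records:
            for violation in check_record(record):
                print(f"[{datetime.now(timezone.utc).isoformat()}] "
                      f"record {record.get('id', '?')}: {violation}")

    audit([
        {"id": 1, "consent": True, "purpose": "support_chat"},
        {"id": 2, "purpose": "biometric_categorization"},  # flagged twice
    ])

A real system would map each rule to a specific control in the EU AI Act or ISO 42001 and route violations to an alerting pipeline rather than a print statement, but the shape — machine-checkable rules evaluated continuously — is the same.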

As organizations increasingly integrate AI into their operations, the stakes for effective governance grow higher. The companies that adopt governance strategies focused on continuous and automated monitoring will gain a competitive edge, reducing risks while accelerating deployment. Those that don’t risk repeating the costly mistakes of SaaS and Web2 — falling behind on compliance, losing customer trust, and stalling innovation.

The message is clear: A continuous, automated approach to governance isn’t just a best practice — it’s a business imperative.

Greg Whalen is CTO of Prove AI.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.


Prompt engineering courses and certifications tech companies want 24 Mar 2025, 10:00 am

Prompt engineering is the process of structuring or creating an instruction to produce the best possible output from a generative AI model. Industries such as healthcare, banking, financial services, insurance, and retail all have use cases for prompt engineering, according to a recent market research report on the field. Prompt engineering serves various applications, including content generation, problem-solving, and language translation, and helps genAI models respond to a range of queries, the report says.
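To see what “structuring an instruction” means in practice, compare a vague request with an engineered one that pins down role, audience, constraints, and output format. A minimal illustration (the prompts are invented for this example):

    # Illustration only: the same request, unstructured vs. engineered.
    vague_prompt = "Write something about our refund policy."

    engineered_prompt = """You are a customer-support assistant for a retail bank.
    Summarize our refund policy for a customer in plain language.
    Constraints:
    - At most 120 words; no legal jargon.
    - End with one sentence on how to escalate to a human agent.
    Output format: two short paragraphs."""

    # Either string could be sent to a genAI model; the second constrains
    # role, audience, length, tone, and format, which is the essence of
    # prompt engineering.
    print(engineered_prompt)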

Factors driving the market growth in prompt engineering include technological advancements in genAI and related fields, along with the growing digitalization and automation in various industries. The report says the growing adoption of AI—especially natural language processing (NLP)—is boosting the demand for prompt engineers.

Prompt engineering certification and hiring

As software developers and others integrate prompt engineering into their AI-enabled workflows, professional courses and certifications are bridging the knowledge gap, and some hiring managers are taking notice.

A multitude of prompt engineering courses and certifications offer candidates the opportunity to learn new skills in artificial intelligence, genAI, and other areas. Such courses or certifications can make a job applicant more attractive to organizations looking to hire people with these types of skills.

“Prompt engineering certifications can have a huge impact on the hiring process,” says Jason Wingate, founder and CEO of Emerald Ocean, a holding company. As prompt engineering is fairly new, human resources can’t rely on years of experience or the completion of a four-year degree in prompt engineering, Wingate says, so certifications become the next best thing.

“We are at the early stage of how prompt engineering certifications get included in the recruiting process,” says Neil Costa, founder and CEO at HireClix, a global recruitment marketing agency. “I think it’s important that formal education programs and certifications are spawning and rapidly evolving, so there can be a measure of validation in this new area of skill development.”

John Yensen, president of managed IT services provider Revotech Networks, says prompt engineering certifications can have a large impact on the hiring process, but their value largely depends on context.

“For example, in industries where AI drives automation, customer support, or even content generation, certified professionals might stand out more,” Yensen says. “With that said, hands-on experience demonstrating real-world problem-solving skills with AI stands out even more. Of course, certifications can help verify a candidate’s foundational knowledge. But hiring managers will still look for practical expertise.”

Certification vs. hands-on experience

Others downplayed certifications for prompt engineering. “I don’t think certifications are the key factor in hiring for AI-related roles, especially for a field like prompt engineering,” says Ximena Hartsock, founder of BuildWithin, a company that helps organizations build apprenticeship, upskilling, training, and mentoring programs.

“What matters most is hands-on experience, playing with the available tools, experimenting, and learning by doing,” Hartsock says. “AI models are improving rapidly, and fine-tuning is becoming less critical than it once was. The real skill is understanding how to work with the AI tools effectively.”

Prompt engineering is more about language than code, Hartsock says. “My suggestion for anyone trying to break into the industry is to learn through the many free resources available,” she says.

Prompt engineering certifications are still finding their place in hiring, says Damien Filiatrault, founder and CEO of Scalable Path, a software staffing agency with a network of over 39,000 developers. “While they can signal AI literacy, hiring managers—especially in technical fields—still prioritize hands-on experience over a certificate.”

For some roles, such as AI-integrated marketing or customer support, a certification might be helpful in demonstrating structured knowledge, Filiatrault says. “However, in software engineering, data science, or machine learning, companies still expect candidates to show practical problem-solving skills rather than rely on formulaic prompt design,” he says.

“We’ve found that businesses benefit more from internal AI upskilling than relying on external certifications,” Filiatrault says. “Many AI-first companies are training employees on domain-specific AI applications rather than hiring based on standalone prompt engineering credentials.”

How certification can accelerate career development

Prompt engineering certificates can provide benefits for both individuals and organizations.

“It goes without saying that certifications can help accelerate career development by showcasing specialized skills in AI model optimization and prompt refinement,” Yensen says. “Certified employees can help improve efficiency in AI-driven workflows, which helps reduce costs and increase productivity,” and this is a benefit for all organizations.

“For example, companies that integrate AI into customer service can benefit from employees trained to craft precise, high-performing prompts, improving response accuracy and user experience,” Yensen says.

One of the biggest benefits of prompt engineering certifications for professionals is the fact that the field is so new. “It’s almost ‘all there is’ in terms of having a proven track record,” Wingate says. “For an individual that means potential career advancement, enhanced skills, and overall recognition.”

For employers, a prompt engineering certification can provide a reliable and efficient way to validate a candidate’s skills in a very new and fresh field, Wingate says.

Prompt engineering certifications and courses

The certifications that are most demanded today tend to come from recognized technology leaders and educational platforms, Yensen says. “OpenAI, DeepLearning.AI, and Microsoft’s AI certifications have gained traction due to their alignment with industry tools and best practices,” he says. “I anticipate that more specialist certifications will continue to emerge for different applications as AI adoption increases.”

Here are some of the most recognized prompt engineering certifications and courses.

AI+ Prompt Engineer Level 1

The AI+ Prompt Engineer Level 1 Certification Program introduces students to the fundamental principles of AI and prompt engineering. Covering the history, concepts, and applications of AI, machine learning, deep learning, neural networks, and NLP, the program also delves into best practices for designing effective prompts that harness the capabilities of AI models to their fullest potential.

AI Foundations: Prompt Engineering with ChatGPT

Offered by Arizona State University via Coursera, this prompt engineering course offers students an opportunity to delve into ChatGPT and large language models (LLMs). Students learn to evaluate prompts and create impactful ones, maximizing the potential of ChatGPT. Designed by Andrew Maynard, an expert in transformative technologies, the course covers prompt templates, creative prompt structures, and designing prompts for various tasks and applications.

AWS’s Essentials of Prompt Engineering

In this course from Amazon Web Services (AWS), participants are introduced to the fundamentals of crafting effective prompts. They gain an understanding of how to refine and optimize prompts for a range of use cases, and also explore techniques such as zero-shot, few-shot, and chain-of-thought prompting. Finally, students learn to identify potential risks associated with prompt engineering.
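Those three techniques differ only in how the instruction is framed, which is easy to see side by side. A small Python sketch with invented prompt text (not taken from the AWS course materials):

    # Zero-shot: ask directly, with no examples.
    zero_shot = ("Classify the sentiment of this review as positive or negative: "
                 "'Battery died after two days.'")

    # Few-shot: prepend a handful of labeled examples to steer the model.
    few_shot = """Review: 'Works perfectly, great value.' -> positive
    Review: 'Stopped charging after a week.' -> negative
    Review: 'Battery died after two days.' ->"""

    # Chain-of-thought: ask the model to reason step by step before answering.
    chain_of_thought = ("A store had 23 apples, used 20 for lunch, and bought 6 more. "
                        "How many are left? Think step by step, then give the final number.")

    for name, prompt in [("zero-shot", zero_shot),
                         ("few-shot", few_shot),
                         ("chain-of-thought", chain_of_thought)]:
        print(f"--- {name} ---\n{prompt}\n")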

Blockchain Council’s Certified Prompt Engineer

The Certified Prompt Engineer certification program from Blockchain Council offers an overview of AI and prompt engineering and a deep understanding of prompt engineering fundamentals, including the principles and techniques of effective prompt engineering. Obtaining the Certified Prompt Engineer certification validates an individual’s knowledge and skills in prompt engineering, according to the council.

ChatGPT Masterclass: The Guide to AI & Prompt Engineering

This course offered by Udemy covers topics including how to apply ChatGPT to prompt engineering, task automation, code, digital marketing, optimizing workflows, creating content, and building websites.

ChatGPT Prompt Engineering for Developers

Offered by DeepLearning.AI in partnership with OpenAI, this course teaches students how to use a large language model to quickly build new and powerful applications. Using the OpenAI API, they will be able to quickly build capabilities that learn to innovate and create value in ways that were cost-prohibitive, highly technical, or impossible before now. This short course describes how LLMs work, provides best practices for prompt engineering, and shows how LLM APIs can be used in applications for a variety of tasks.

Generative AI: Prompt Engineering Basics

In this course from IBM, students learn the concept and relevance of prompt engineering in generative AI models, apply best practices for creating prompts, explore examples of impactful prompts, practice common prompt engineering techniques and approaches for writing effective prompts, and explore commonly used tools for prompt engineering.

Google Prompting Essentials

Participants in this course practice the steps to write effective prompts, including applying prompting techniques to help with everyday work tasks, using prompting to speed up data analysis and build presentations, and designing prompts to create AI agents that role-play conversations and give expert feedback.

MIT’s Applied Generative AI for Digital Transformation

The Applied Generative AI for Digital Transformation course, offered by MIT Professional Education, delves into how generative AI generates original and innovative content, propelling an organization’s digital transformation efforts, according to MIT. By combining technical knowledge with management perspectives, ethical concerns, and human elements, the eight-week program provides a comprehensive understanding of AI-driven digital transformation strategies.

Vanderbilt’s Prompt Engineering Specialization

This course from Vanderbilt University is designed to enable students to master prompt engineering patterns, techniques, and approaches to effectively leverage generative AI. Areas covered include ChatGPT, genAI, advanced data analysis, problem formulation for genAI, chain of thought prompting, prompt patterns, and LLMs.


Kotlin bolsters K2 compiler plugin support, WebAssembly debugging 22 Mar 2025, 2:53 am

Kotlin 2.1.20, just released as the latest version of JetBrains’ cross-platform programming language, improves both plugin support for the K2 compiler and Kotlin/Wasm debugging.

The Kotlin update was announced March 20. Instructions on updating can be found at kotlinlang.org.

For the K2 compiler, Kotlin 2.1.20 brings updates to the new kapt and Lombok compiler plugins, including having the kapt compiler plugin enabled by default for all projects. Kotlin’s builders have been improving performance of the kapt annotation processing compiler dating back to Kotlin 1.9.20, from October 2023. But kapt has been in maintenance mode, meaning JetBrains keeps it up-to-date with Kotlin and Java but has no plans to add new features. The experimental Lombok plugin, for using Java Lombok declarations, now supports the @SuperBuilder annotation, thus making it easier to create builders for class hierarchies.

With Kotlin/Wasm, for compiling Kotlin code into the WebAssembly format, Kotlin 2.1.20 improves property usage and debugging. Custom formatters now work out of the box in development builds, and DWARF (debugging with arbitrary record format) facilitates code inspection. DWARF data can be embedded into the WebAssembly binary, and many debuggers and virtual machines can read this data to provide insights into compiled code.

Also in Kotlin 2.1.20:

  • The Compose compiler relaxes some restrictions on @Composable functions introduced in prior releases. Also, the Compose compiler Gradle plugin is set to include source information by default, aligning the behavior on all platforms with Android.
  • The standard library introduces experimental features including common atomic types, improved support for UUIDs (Universally Unique Identifiers), and new time-tracking functionality.
  • For Kotlin/Native, a new inlining optimization pass is being introduced.
  • Kotlin plugins supporting Kotlin 2.1.20 are bundled into the latest IntelliJ IDEA and Android Studio IDEs.
  • For Kotlin Multiplatform, a new DSL (domain specific language) replaces the Gradle Application plugin.


OpenSilver extends to iOS and Android 21 Mar 2025, 10:09 pm

Userware has updated its OpenSilver open source UI framework for .NET, expanding the reach of Windows Presentation Foundation (WPF) to mobile apps.

OpenSilver is best known as a replacement for Microsoft Silverlight, a rich internet application framework that, like Adobe Flash, required a browser plugin. OpenSilver apps run in browsers without a plugin.

OpenSilver 3.2, introduced March 18 and downloadable from the project website, expands the reach of the framework beyond browsers to mobile platforms by integrating .NET MAUI Hybrid. This approach combines the consistency of web-based UI rendering with the power of the .NET runtime, Userware said, and allows developers to deploy WPF-compatible applications to iOS, Android, Windows, Mac, Linux, and the web from a single code base. OpenSilver 3.2 uses .NET MAUI Hybrid to deliver a WPF-compatible UI through WebView while compiling C# business logic to native code. A single XAML/C# code base is maintained across platforms.

Also with OpenSilver 3.2, two sample applications are featured, demonstrating implementation across platforms and native platform API calls. Userware in its blog post called out OpenSilver integration with the Visual Studio IDE and Visual Studio Code editor, with extensions for these tools available in the Visual Studio Marketplace and the VS Code Marketplace. OpenSilver 3.2 follows the December 2024 release of OpenSilver 3.1, which introduced an XAML designer for VS Code.

OpenSilver 3.2 also improves scrolling on touch devices, with implementations of panning support and scroll inertia. The release also implements an event manager, layout mirroring, and support for mixed LeftToRight and RightToLeft flow direction, adds support for multiple targets to Storyboards, and adds support for animations without Storyboard on UIElements. And ItemsControl now can adjust the scrolling speed, which can make scrolling smoother when displayed items are very large, Userware said.


Nvidia launches AgentIQ toolkit to connect disparate AI agents 21 Mar 2025, 11:27 am

As enterprises look to adopt agents and agentic AI to boost the efficiency of their applications, Nvidia this week introduced a new open-source software library — AgentIQ toolkit — to help developers connect disparate agents and agent frameworks.

The toolkit, according to the chipmaker, packs in a variety of tools, including ones to weave in RAG, search, and conversational UI into agentic AI applications.


“It’s essentially a connectivity layer for AI agents. It lets you connect, profile, and optimize teams of AI agents that were built using different frameworks including our own platform,” said Paul Chada, co-founder of DoozerAI, an agentic AI software startup.

While Nvidia’s intention with the toolkit is to help enterprises break down silos between different agent systems, Chada noted that several existing offerings provide similar capabilities, including LangChain, CrewAI, and Microsoft’s Semantic Kernel.


The Nvidia toolkit also includes a configuration builder that allows developers to prototype new agentic AI applications, plus a set of reusable tools, pipelines, and agentic workflows to ease the development of agentic AI systems.

In addition, the toolkit comes with telemetry, profiling, and optimization tools to enhance the accuracy and performance of the agentic systems being built, the chipmaker said, adding that developers can use Dynamo to accelerate agent performance further.

How does it help developers and enterprises?

AgentIQ, according to analysts and experts, is business-friendly because it is open source and doesn’t require modifications to be shared back with the community.

For developers, Chada said, AgentIQ will act as a time-saver, allowing easier orchestration between different agent frameworks. For enterprises, the toolkit will provide visibility into AI operations, helping identify bottlenecks and reduce response times dramatically.

The lead of The Futurum Group’s CIO practice said that AgentIQ’s granular telemetry data is highly valuable for enterprises looking to fine-tune AI performance and cost-efficiency in real time, especially with agents that interface with the real world.

According to IDC’s research vice president Arnal Dayaratna, AgentIQ will also allow enterprises to build custom agents that reflect the specificities of their own organization’s business processes and workflows instead of having to configure commercial, off-the-shelf agents that are being offered by some vendors.

AgentIQ could also help enterprises avoid vendor lock-in by creating interoperability between different AI agent platforms. “Rather than forcing businesses to choose one ecosystem and stick with it, Nvidia’s approach lets them use multiple frameworks side-by-side. They can pick the best tools for specific tasks while still having everything work together seamlessly,” Chada said.

Considerations before adoption

Enterprises may need to consider a few things before looking to adopt AgentIQ.

“AgentIQ’s development framework approach requires coding and the adoption of a new design framework, which creates a significant barrier for many potential users or enterprises,” Chada said. Some enterprises might instead find vendors that offer no-code creation of AI agents more attractive, he added, because a no-code approach democratizes agent creation, allowing business users and subject matter experts to build functional AI agents through intuitive interfaces rather than requiring developer resources.


Bridging the digital skills gap 21 Mar 2025, 10:00 am

A recent McKinsey study highlights that 87% of executives now view skill shortages as a critical barrier to their digital transformation efforts. The digital skills gap hinders a company’s ability to fully leverage advanced technologies such as artificial intelligence, cloud computing, data analytics, and cybersecurity. Although this lack of qualified people is not a new problem, its impact has grown more severe over time. The situation impedes innovation and introduces security risks that delay critical technology-driven projects, all of which costs businesses billions of dollars in value each year.

Enterprises must adopt a much more practical approach, focusing on deliberate investment, accountability, and measurable outcomes. Half-hearted attempts, such as sporadic training programs or unstructured partnerships with educational institutions, will no longer suffice in an environment where technology evolves faster than most organizations can adapt. Businesses must step back, evaluate their efforts, and implement a comprehensive strategy that yields tangible results.

Is it really costing billions?

The lack of skilled professionals on staff limits enterprises in countless ways. First, project timelines stretch out longer than they should. For instance, companies transitioning to the cloud face delays while they search for employees with specialized migration skills or hire consultants with specific skill sets. Second, cybersecurity risks soar due to insufficient capabilities to secure increasingly digitized infrastructure.

According to the IBM 2024 X-Force Threat Intelligence Index, cybersecurity threats have grown nearly 38% year over year—driven, in part, by a global shortage of 3.4 million qualified cybersecurity professionals. Inadequate security doesn’t come from technological failures but from a scarcity of individuals who understand how to protect the business.

Beyond these immediate challenges, enterprises suffer from diminished innovation. Advanced technologies such as AI and machine learning are reshaping industries, yet according to IDC, nearly 50% of businesses cite skill shortages as the primary hurdle to their adoption. In an era where agility and adaptability define longevity, these deficits erode competitiveness and value.

Invest in people and hold leadership accountable

Addressing the digital skills gap starts with a shift in mindset. It’s time to treat workforce development as a priority investment that directly correlates with enterprise growth and revenue rather than a crisis to fix when it’s convenient. This means approaching the problem with pragmatism and discipline, underpinned by data-backed strategies. Here are some steps to ensure enterprises take the right course of action:

Companies need to allocate substantial and sustained budgets toward ongoing workforce training. Investments should enhance employees’ technical expertise and their abilities to adapt to new tools and processes. Large-scale skilling initiatives rolled out by tech giants like Microsoft and Amazon can serve as a model. Subscriptions to Coursera, Udacity, and LinkedIn Learning can further augment learning opportunities.

Upskilling must also extend beyond IT departments. By fostering comprehensive digital fluency across all departments and functions, organizations can cultivate a more adaptable and innovation-driven culture.

Partnerships with universities and technical academies are common, but enterprises need to engage strategically. Collaborations should focus on creating tailored, industry-relevant programs rather than general skill-building courses that fail to align with real-world enterprise needs.

For example, focus on specialized areas such as cloud architecture, AI model engineering, and data analytics to ensure that workforce competency aligns with an organization’s trajectory. Additionally, providing internships and apprenticeships that immerse students in enterprise environments establishes a clear pipeline of qualified talent.

Too many businesses approach the digital skills deficit without measurable goals. Enterprises often pour resources into training programs without examining their return on investment. Decision-makers must establish key performance indicators that assess the efficacy of skill-development initiatives.

Turn challenges into opportunities

Savvy enterprises will approach the digital skills gap as an urgent challenge and a massive opportunity. Companies can address the current situation by investing in workforce development while establishing a foundation for ongoing resilience and innovation. Consider the cumulative impact of proactive effort: Employees who acquire new skills will naturally advocate for change within their organizations. Teams with improved technical abilities foster faster adoption of cutting-edge technologies. Costly project delays shrink, giving the business increased agility in a competitive landscape.

Businesses can transform a common pain point into a competitive advantage by investing intentionally and strategically in workforce development, establishing accountability structures, and linking outcomes to KPIs. However, achieving success demands commitment and discipline—not policies that merely sound appealing.

As the gap between technological advancement and workforce proficiency widens, enterprises that fail to act will perpetually be playing catch-up. On the other hand, those that approach the issue pragmatically and invest wisely in their people will unlock the billions of dollars in value they’ve been leaving on the table.


Everyone needs a genAI strategy now 21 Mar 2025, 10:00 am

Does your company have an AI strategy yet? Our top stories this month point to the perils of not having one—including but not limited to the rising threat of shadow AI. We’re also looking ahead to the future of highly adaptive UIs, the rise of citizen developers, and what’s happened to developer communities like Stack Overflow. All are impacted by genAI, and the effects are still playing out.

Top picks for generative AI readers on InfoWorld

Building generative AI? Get ready for generative UI
Chatbots may be the popular face of genAI today, but the future points to a different kind of interface—one that adapts to user needs on the fly.

Using generative AI tools to simplify app migrations
Migrating legacy apps to updated platforms involves a lot of thankless grunt work. Good thing you can outsource much of it to genAI.

The rising threat of shadow AI
Even at companies that don’t embrace AI, workers are using ChatGPT and other public LLMs to streamline their workflows and boost productivity. The risk of exposing corporate data is higher than some realize.

AI can give you code but not community
Stack Overflow has been an important resource and community-building space for developers for nearly two decades. Now, many programming newbies are turning to AI instead.

More good reads and generative AI updates elsewhere

How LLMs can make FFmpeg easier to use
FFmpeg, the command-line tool for converting video and audio files, is both “ridiculously powerful and ridiculously complex.” Here’s how LLMs can streamline its operation.

Has the era of citizen developers arrived?
Generative AI lets non-programmers dream up apps, describe them in natural language, and produce results that (more or less) work. Are we ready for what comes next?

Walmart doubles down on AI
Walmart might be the world’s largest brick-and-mortar retailer. Turns out, it also employs thousands of developers building internal AI tools.


Developers: apply these 10 mitigations first to prevent supply chain attacks 21 Mar 2025, 2:04 am

DevOps leaders hoping to find a single cybersecurity risk framework that will prevent their work from experiencing the kinds of compromises that lead to supply chain attacks will have a hard time, according to a new research paper.

In a paper submitted to Cornell University’s arXiv site for academic manuscripts, the six researchers — four from North Carolina State University, one from Yahoo and one between positions — said they could rank the top tasks that application development teams should perform to blunt possible compromises in their work that might lead to their applications being used to attack users.

They did it by mapping the 114 reported techniques used in compromising three vital apps (SolarWinds Orion, Log4j, and XZ Utils) against the 73 recommended tasks listed in 10 software security frameworks, including the US NIST Secure Software Development Framework.

However, the researchers added, three mitigation factors were missing from all 10 frameworks. That suggests that no one framework will close all the potential holes in an application. The three missing elements are:

  • making sure open source software is sustainable;
  • having environmental scanning tools;
  • and making sure application partners report their vulnerabilities.

Johannes Ullrich, dean of research at the SANS Institute, agreed.

“None of [the frameworks] is perfect,” he said in an email, “and that is OK. The software supply chain can’t be secured in isolation. DevOps leaders must talk to the rest of the enterprise to see where the gaps are that they need to fill. These frameworks are a starting point for that discussion.

“As for the three gaps, it depends a bit on the scope of your software supply chain security effort. For example, they [the researchers] do not consider ‘open source software’ a supplier, as there is no contractual relationship. I think there is a contractual relationship, even if often a weak one, governed by the various open source licenses. I don’t think that is fundamentally different compared to commercial software. Commercial suppliers may ‘disappear’ or stop supporting a particular piece of software at any time (which I think is where they are going with this control).”

Environmental scanning tools, another missing mitigation, are often part of vulnerability management, Ullrich added. But, he said, sometimes other activities can fill the gap. For example, ‘Response Partnership’ is often part of the incident response framework, and collaboration is often also part of threat intelligence.

“You can always find gaps in frameworks if you extend their use beyond what they are originally designed to do,” he concluded, “and again, they need to be consistently updated.”

Worst supply chain attacks

The paper, Closing the Chain: How to reduce your risk of being SolarWinds, Log4j or XZ Utils, deals with three of the worst supply chain compromises in recent years.

  • SolarWinds: As we reported, Microsoft believed “at least 1,000 very skilled, very capable engineers” worked on the hack, which involved inserting malicious code dubbed Sunburst into the software updates for SolarWinds’ Orion network management suite. SolarWinds said about 18,000 firms downloaded the updates, and of them, about 100 were compromised;
  • Log4j: Attackers exploited a flaw (CVE-2021-44228), dubbed Log4Shell, in Apache’s open source log4j logging utility. It was rated 10 out of 10 on the CVSS vulnerability rating scale, and could lead to remote code execution (RCE) on underlying servers. Because of its ubiquity in a wide range of applications, it isn’t clear how many IT networks were compromised;
  • XZ Utils is a data compression utility, part of major Linux distributions. The installation of a backdoor was caught before it could do widespread damage, we reported last year.

The researchers wanted to prioritize all of the recommended tasks in 10 security software development frameworks by looking at the tactics threat actors used in these three hacks, indicating current framework tasks that could have mitigated those attacks. The work would also show gaps in the frameworks that leave code vulnerable to attacks.

They analyzed 106 cyber threat intelligence (CTI) reports of the techniques used in the three attacks, then mapped them to 73 best practice tasks in the frameworks that developers should be performing. Finally, they ranked priority tasks that would best mitigate the attack techniques.
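The ranking step itself is conceptually simple: score each framework task by how many of the observed attack techniques it would have mitigated, then sort. A toy Python version with invented task and technique names (the paper’s actual mapping covers 114 techniques and 73 tasks):

    # Toy version of the paper's scoring: one point per observed attack
    # technique a task would have mitigated. Names are invented.
    task_mitigates = {
        "role-based access control":      {"valid-accounts", "trusted-relationship"},
        "continuous system monitoring":   {"obfuscation", "c2-traffic", "valid-accounts"},
        "update vulnerable dependencies": {"exploit-public-facing-app"},
    }

    observed_techniques = {"valid-accounts", "obfuscation",
                           "trusted-relationship", "c2-traffic"}

    scores = {task: len(mitigated & observed_techniques)
              for task, mitigated in task_mitigates.items()}

    for task, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{score}  {task}")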

While there were 114 unique attack techniques across the three hacks, 12 of them were common, including exploiting trusted relationships, obfuscating data, and compromising infrastructure. They also found that 27 of the recommended 73 best practices could have mitigated the three attacks.

However, they added, three further mitigation tasks were not included in any of the frameworks, among them using sustainable open source software and environmental scanning tools.

“Thus, software products would still be vulnerable to software supply chain attacks even if organizations adopted all recommended tasks,” they concluded.

Starter kit of mitigations

What the work did allow the researchers to do is create a ‘starter kit’ of 10 defensive tactics developers should adopt, based on the highest mitigation scores in their research. Taken from the Proactive Software Supply Chain Risk Management Framework (P-SSCRM), the 10 are:

  • role-based access control (see the sketch after this list)
  • continuous system monitoring
  • monitoring and controlling communications at the external boundary of the system and at key internal boundaries
  • monitoring changes to configuration settings
  • enabling authentication for employees and contractors
  • updating vulnerable dependencies when a fixed version is available
  • enumerating possible threat vectors through threat modelling and attack surface analysis
  • limiting the information flow across trust boundaries to participants in the supply chain
  • protecting information at rest
  • remediating vulnerabilities, prioritizing based upon risk.
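As a concrete illustration of the first item, role-based access control reduces to one rule: a privileged action declares which roles may invoke it, and that check runs before the action does. A minimal Python sketch (the roles and actions are hypothetical, not taken from P-SSCRM):

    # Minimal role-based access control: privileged actions declare the
    # roles allowed to invoke them; the check runs before the action does.
    from functools import wraps

    ROLE_PERMISSIONS = {
        "release-manager": {"sign_release", "publish_artifact"},
        "developer":       {"publish_artifact"},
    }

    class PermissionDenied(Exception):
        pass

    def requires(action: str):
        """Decorator that rejects callers whose role lacks the action."""
        def decorator(func):
            @wraps(func)
            def wrapper(user_role: str, *args, **kwargs):
                if action not in ROLE_PERMISSIONS.get(user_role, set()):
                    raise PermissionDenied(f"{user_role!r} may not {action}")
                return func(user_role, *args, **kwargs)
            return wrapper
        return decorator

    @requires("sign_release")
    def sign_release(user_role: str, artifact: str) -> str:
        return f"{artifact} signed"

    print(sign_release("release-manager", "build-1.0.tar.gz"))  # allowed
    # sign_release("developer", "build-1.0.tar.gz")  # raises PermissionDenied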

These 10 mitigations apply to broader software security rather than being specific to the software supply chain security, the researchers added. “Before mitigating software supply chain attacks, common software security tasks should be addressed,” they emphasized.

In an interview, report co-author Sivana Hamer acknowledged that all of the 10 frameworks studied have gaps in the mitigations that should have applied to the three hacked applications. “None of the frameworks are supposed to provide a complete view of security,” she said. “All have a different notion, like one is more focused on build environments.”

The ‘starter kit’ of mitigations is the list of security tasks that developers should prioritize, she said.


Microsoft .NET 10 Preview 2 shines on C#, runtime, encryption 20 Mar 2025, 11:41 pm

Microsoft has unveiled a second preview of its planned .NET 10 developer platform, featuring enhancements related to encryption, the .NET runtime, and the C# language.

Published March 18, .NET 10 Preview 2 follows the February 25 debut of Preview 1. Preview 2 can be downloaded from dotnet.microsoft.com. A general release of .NET 10 is expected in November.

In .NET 10 Preview 2, new ExportPkcs12 methods on X509Certificate2 allow callers to choose which encryption and digest algorithms are used to produce the output. The previous methods defaulted to the Windows XP-era de facto standard, which generally resulted in using an older encryption algorithm.

Also in .NET 10 Preview 2, C# 14 rounds out the set of partial members by adding partial instance constructors and partial events. These join the existing partial methods and partial properties (the latter added in C# 13). Partial members let one part of a class declare a member, which can then be implemented in another part of the same class, often in a different file. Partial members often are used by source generators.

As for .NET runtime improvements, Preview 2 continues an effort to enhance the JIT compiler’s devirtualization capabilities for array interface methods. This effort was started in Preview 1 and continues with additional improvements and optimizations.

Elsewhere in .NET 10 Preview 2:

  • The dotnet CLI tool has added a few new aliases for commonly used but often forgotten commands. New commands include dotnet package add, dotnet package list, dotnet package remove, dotnet reference add, dotnet reference list, and dotnet reference remove. The new aliases are provided to make the commands easier to remember and type.
  • The Blazor Web App project template now includes a ReconnectModal component with collocated stylesheet and JavaScript files. This is intended to improve developer control over the reconnection UI when the client loses the WebSocket connection to the server.
  • Quality improvements were made to .NET for Android and to .NET for iOS, Mac Catalyst, macOS, and tvOS.
  • Performance of WPF (Windows Presentation Foundation) has been improved by replacing data structures and optimizing method operations.
  • NativeAOT apps now have quicker startup time and smaller memory footprints, and they can run on machines that do not have the .NET runtime installed.


Ex-Sun CEO Scott McNealy reflects on Java’s founding 20 Mar 2025, 4:37 pm

Languages such as Python and Rust get a lot of new attention these days, but Java is still going strong after three decades. With Java’s 30th birthday two months away, Scott McNealy, former CEO of Java founding company Sun Microsystems, and Oracle officials reflected on the programming language’s staying power at the JavaOne 2025 conference this week.

In a keynote presentation March 18 featuring many speakers, McNealy recalled the start of Java, precipitated by the hiring of Java founder James Gosling. He recalled that Gosling wanted to build an integrated clicker, i.e., a TV set-top box, which needed a language and OS. The language became Java. “We never did ship a clicker,” said McNealy at the Redwood Shores, Calif., conference. Then, a meeting with Marc Andreessen of browser builder Netscape resulted in hundreds of thousands of Java downloads in the browser in the first few weeks; McNealy said this led to the birth of the Internet. “And Java is still top three [among languages] in the world 30 years later, and I think it’s number one for people who are doing real work, and doing enterprise,” McNealy said. Java will turn 30 years old on May 23.

Oracle officials also championed Java, whose stewardship the company took over in 2010 after acquiring Sun. “Today, the world truly runs on Java,” said Georges Saab, senior vice president of Java development at Oracle. Ninety-four of the Fortune 100 run Java, Saab said. He cited users including Uber, Netflix, and LinkedIn. Mark Reinhold, chief architect of the Java platform at Oracle, chimed in. “After three decades, Java remains one of the most popular programming platforms in the world,” he said, adding that Java is used by millions of developers to build mission-critical systems for organizations large and small.

Stewardship of the Java platform is guided by two key values: readability and compatibility, Reinhold said. “We will evolve the language but cautiously, with a long-term view,” he said.


TypeScript gets Go-faster stripes 20 Mar 2025, 10:00 am

Last week Microsoft announced a major shift in the architectural direction of its TypeScript language. Until now, the TypeScript compiler, tsc, has been written in TypeScript itself, compiled to JavaScript, and run on top of Node.js. However, it’s shifting to a stand-alone native compiler written in the Go language.

This move means a considerable acceleration in compiling the large projects that TypeScript was designed to support, such as the cloud-hosted Office applications and the Visual Studio Code IDE. Microsoft’s announcement blog post suggested that code could be compiled more than 10 times faster than before.

A new compiler for TypeScript

The numbers certainly look good: Visual Studio Code’s 1.5 million lines of code take nearly 78 seconds to compile using the current tool, but the new native compiler comes in at only 7.5 seconds, which is 10.4 times faster. That’s a significant improvement. It means the development team can start to think about treating the new compiler much like .NET’s Roslyn, adding the ability to use the compiler inside your development tools to dynamically debug code as you write it or to work more closely with GitHub’s Copilot and similar tools.

Speed is the key to these new capabilities; they need a native compiler rather than one running inside a JavaScript engine. Having the compute capabilities to dig down into your code as you write it will make TypeScript more accessible and improve both developer productivity and satisfaction. As we’re being encouraged to be more productive, getting the necessary compiler-level support to, say, refactor across the entire Visual Studio Code code base is essential.

Speeding up the TypeScript compiler by a factor of 10 will make it easier for an editor language server to provide the information needed to highlight code and track brackets and semicolons. At the same time, tools such as IntelliSense will be able to deliver more accurate code completions. Part of the work in this update adds support for the Language Server Protocol, which will allow more IDEs and programmer’s editors to add support for TypeScript.
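For context, the Language Server Protocol is a JSON-RPC exchange between an editor and a language server; adopting it is what lets any LSP-capable editor query the new compiler. A simplified sketch of the kind of request an editor sends (the message shape follows the LSP spec; the file path is invented):

    # Simplified Language Server Protocol request (JSON-RPC 2.0): the editor
    # asks the language server for completions at a cursor position.
    import json

    completion_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "textDocument/completion",
        "params": {
            "textDocument": {"uri": "file:///src/app.ts"},
            "position": {"line": 10, "character": 4},  # zero-based in LSP
        },
    }
    print(json.dumps(completion_request, indent=2))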

Microsoft is noting significant improvements in memory usage. By removing the overhead that comes with running a compiler on a just-in-time platform, there’s already about a 50% improvement, and that’s before the team adds its own optimizations.

Lower memory overhead also benefits the tools and environments used to build TypeScript applications. With more and more organizations shifting to virtual environments such as GitHub Codespaces and Azure Dev Spaces, being able to keep resource requirements to a minimum, even for the most complex projects, helps manage costs and ensures that virtual environments load as quickly as possible.

Developing TypeScript-Go in public

These are early days for the new TypeScript compiler, and development is running in parallel with the next releases of the familiar JavaScript-based tool. The release notes show that even though much of what we need is ready, there’s still a lot missing. As a result, the next minor and major releases of TypeScript (5.9 and 6.x) will still be built on the current platform. However, as the new compiler adds enough features to support TypeScript 6.x code, it will be released as TypeScript 7.0. You can think of this as a migration much like .NET’s away from the .NET Framework, or PowerShell’s to being built on .NET; Microsoft will keep old and new TypeScript under development until it’s time for the new to leave the old behind.

There have been criticisms that it won’t be possible to run the new compiler as part of a web-based playground. But as the language’s designer, Anders Hejlsberg, notes in a GitHub comment, there is the possibility of using a WebAssembly version of the Go compiler to continue to support web-based development—and to offer many of the new compiler features to web code playgrounds.

Development is happening in the open on GitHub in a separate repository from the JavaScript release. For now, there isn’t a public binary release, but that doesn’t stop you from taking the current code base and compiling your own tools to give it a try. Microsoft provides instructions for building the Go-based TypeScript compiler, as well as the related Language Server with support for Visual Studio Code.

Build it yourself

Building the new TypeScript compiler and language server is relatively straightforward. You'll need to install a few prerequisites before getting started: an up-to-date version of the Go compiler, the Node.js JavaScript platform, and the hereby task runner. If you've not used hereby before, it's a Node.js-based tool for chaining tasks together and is used to orchestrate the TypeScript build process. Different tasks are available for different outputs: one to build the compiler, one to build the tests, one to make sure the code is properly linted, and more.

Technically you don’t need to use hereby and can work with Go’s own tool, but using hereby does simplify things. One important point, if you haven’t installed hereby as a global tool, you can use npm’s npx command to launch and run the task runner. So, instead of using hereby build to run a build, simply use npx hereby build. This gives you the same output without the complexity of editing paths.

To get started, first clone the TypeScript-Go GitHub repository along with its submodules. Cloning the submodules was an issue on my Windows system, as their path lengths were too long. However, I was able to install them later using Git's submodule update command, which allowed me to use npx hereby test to run TypeScript-Go's tests once I'd completed a build.

With the prerequisites installed, building the Go-based TypeScript compiler is straightforward. Running npx hereby build installed the experimental tsgo compiler in a built directory alongside the source code. Once the build was complete, I checked it by running the provided tests. Although there isn't a complete set of tests yet, those that do exist give a good feel for how well the compiler performs. The current suite passed in just over 3 minutes 30 seconds on my test hardware.
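
For reference, here is the whole build sequence in one place. This is a sketch of what worked for me; the repository URL and task names are current as of this writing and may change as the project evolves:


# Clone the compiler repository along with its submodules
$ git clone --recurse-submodules https://github.com/microsoft/typescript-go.git
$ cd typescript-go

# Install the Node.js dependencies, then build the compiler
$ npm ci
$ npx hereby build

# Run the current test suite against the new build
$ npx hereby test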

A compiler plus a new language server

The new tools include a first pass at delivering a language server for the new compiler. It’s not hooked up to any new development tool yet, but you can install it in Visual Studio Code to see how well it performs at parsing changes to your TypeScript code by looking at its outputs in the Code debugger.

Microsoft provides instructions for installing it in Visual Studio Code, with the required JSON configuration included in the GitHub repository. I found this part of the documentation a little confusing and initially misread it as implying that the configuration JSON was generated as part of the build process. When I couldn't find the necessary files in the built folders, I reread the documentation and realized they were cloned along with the source code for the compiler and language server.

Once I got over that hurdle, I could see language server output in Visual Studio Code, running in extension development mode. This approach will allow third-party extension developers to hook into the TypeScript-Go language server and build their own extension tools on top of the new compiler. What's there is responsive, firing events as you make changes to TypeScript in the Code editor window, with outputs shown in the Output pane when you select either typescript-go or typescript-go (LSP).

It’s good to see Microsoft investing in developer productivity in its compilers. The recent resurgence of .NET is as much due to its Roslyn compiler as to improvements in its languages and UI tooling, and it looks as though these TypeScript updates offer many of the same benefits. Having a fast compiler is key to getting value from modern development tools, as we can use them to deliver real-time error corrections and provide the deep understanding of compiled code necessary to get the most from techniques like refactoring.

With development in the open, we’re already seeing pull requests coming in from the community to improve performance in other areas of the platform. It’s an excellent sign to see outside involvement this early in a project of this scale, as it shows both community buy-in to the reengineering of TypeScript and the popularity and utility of the platform itself.

TypeScript has become an essential tool for Microsoft, as it powers both Visual Studio Code and the web versions of its Office suite. This new compiler will make it easier for Microsoft's own development teams to deliver high-quality code, and we can all take advantage of it in our own applications—moving our own large JavaScript applications to a strongly typed, easier-to-manage language that shares much of the same syntax. Microsoft may be solving its own problems here, but when Redmond's developers get to be more productive, so do we.


How to implement idempotent APIs in ASP.NET Core 20 Mar 2025, 10:00 am

When designing your APIs, you should make them idempotent to ensure that they are robust, reliable, and fault-tolerant. An operation is idempotent when repeating it will always result in the same outcome. For example, an elevator button is idempotent. No matter how many times you press the same elevator button, the elevator will make one trip to the designated floor. Your APIs should work the same way.

In this article, we’ll examine how to build idempotent APIs in ASP.NET Core with relevant code examples to illustrate the concepts covered. To use the code examples provided in this article, you should have Visual Studio 2022 installed in your system. If you don’t already have a copy, you can download Visual Studio 2022 here.

Why do we need idempotent APIs?

Idempotent APIs ensure that duplicate requests will yield one and the same result. For example, the HTTP methods GET, HEAD, OPTIONS, and TRACE are idempotent because they do not modify the state of a resource in the server. Instead, they fetch the relevant resource metadata or its representation.

Let us understand the importance of API idempotency with an example. In a typical shopping cart application, a user makes an API call to create a new order. If the request succeeds, the user receives a confirmation. However, a network issue might prevent the confirmation from reaching the user even though the server received the request and created the order. Not having received a confirmation, the user might then retry the same API call to recreate the order.

What if the same order is created more than once? Clearly, we want to avoid duplicating orders. You should design your shopping cart API to be idempotent to avoid creating duplicate orders when a user makes multiple retries to the same API endpoint.

In general, you should make your APIs idempotent in order to prevent duplicate requests and retries from putting your application in an erroneous state. In other words, idempotent APIs help make your application more robust, reliable, and fault-tolerant.

Understand idempotency in HTTP methods

Note that the HTTP POST and HTTP PATCH methods are not idempotent. However, the HTTP methods GET, HEAD, PUT, and DELETE are idempotent by design.

  • HTTP GET: The HTTP GET operation is the most widely used HTTP method. HTTP GET retrieves data from the server but does not alter the state of the resource. Hence, the HTTP GET method call is idempotent.
  • HTTP HEAD: You use the HTTP HEAD method to retrieve the metadata of a resource. This method is typically used to determine if a resource is available on the server. If the resource exists, invoking the HTTP HEAD method will return the size and last modified date of the resource. In contrast to the HTTP GET method, the HTTP HEAD method does not return the message body as part of the response.
  • HTTP PUT: You use the HTTP PUT method to update existing data on the server. Remember that this operation will alter the state of a resource the first time only. Subsequent PUTs will not alter the state of the resource, but simply overwrite the state with the same input. For example, if you update a record in a database, the update should first check if the record exists. If it exists, the data stored in the database should be replaced by the new data. However, if you make multiple calls to the same API method with the same input, the data stored in the database will remain the same.
  • HTTP DELETE: You can delete a resource several times, but only the first deletion will change the state of the system. Because invoking one or multiple HTTP DELETE operations has the same result, the method is idempotent. If you delete a resource that does not exist, the method should return a message stating that the resource has already been deleted or not found.
  • HTTP POST: The HTTP POST method is used to send data to the server for processing. Because a POST operation can create a new resource on the server, it is never idempotent. Multiple POST calls can result in multiple new resources.
  • HTTP PATCH: The HTTP PATCH method is used to modify a resource on the server without replacing the entire resource. This method is not idempotent because repeated HTTP PATCH requests can change the state of the resource repeatedly. For example, if you make an HTTP PATCH request to reduce the quantity of an item in inventory, the available stock of the item will be reduced with each repetition of the request.

Create an ASP.NET Core Web API project in Visual Studio 2022

To create an ASP.NET Core 9 Web API project in Visual Studio 2022, follow the steps outlined below.

  1. Launch the Visual Studio 2022 IDE.
  2. Click on “Create new project.”
  3. In the “Create new project” window, select “ASP.NET Core Web API” from the list of templates displayed.
  4. Click Next.
  5. In the “Configure your new project” window, specify the name and location for the new project. Optionally check the “Place solution and project in the same directory” check box, depending on your preferences.
  6. Click Next.
  7. In the “Additional Information” window shown next, select “.NET 9.0 (Standard Term Support)” as the framework version and ensure that the “Use controllers” box is checked. We will be using controllers in this project.
  8. Elsewhere in the “Additional Information” window, leave the “Authentication Type” set to “None” (the default) and make sure the check boxes “Enable Open API Support,” “Configure for HTTPS,” and “Enable Docker” remain unchecked. We won’t be using any of those features here.
  9. Click Create.

We’ll use this ASP.NET Core Web API project in the sections below.

Custom logic for idempotent APIs

In this section, we’ll examine how we can build an idempotent RESTful API in ASP.NET Core. For the sake of simplicity and brevity, we’ll only create one HTTP POST action method in our controller class and skip creating other action methods.

As we already know, HTTP POST methods are not idempotent by design because they are used to process data or create new resources. However, we can make them idempotent by writing custom logic. The following sequence of steps illustrates the logic:

  1. The client creates a unique key with each request and sends it to the server in a custom header.
  2. When the server receives the request, it checks whether the key is new or already exists.
  3. If the key is new, the server processes the request and saves the result.
  4. If the key already exists, the server returns the result of the stored operation without processing the request again.

In short, we assign a unique key to each request that allows the server to determine whether the request has already been processed. In this way, we ensure that each request is processed once and only once.
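
To make the contract concrete, here is a minimal client-side sketch. The code is hypothetical: it assumes the server reads a header named X-Idempotency-Key (as the controller we write below does) and that the order endpoint is mapped at /api/order. The important point is that the key is generated once per logical operation and resent unchanged on every retry.


// Hypothetical client: one idempotency key per logical operation,
// reused verbatim on every retry of that operation.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class OrderClient
{
    static readonly HttpClient Client = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

    static async Task Main()
    {
        // Generate the key once for this logical "create order" operation.
        string idempotencyKey = Guid.NewGuid().ToString();

        var request = new HttpRequestMessage(HttpMethod.Post, "/api/order")
        {
            Content = new StringContent("{\"Order_Id\": 1}", Encoding.UTF8, "application/json")
        };
        request.Headers.Add("X-Idempotency-Key", idempotencyKey);

        // If this call times out, retrying with the same key cannot create a duplicate order.
        HttpResponseMessage response = await Client.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}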

Now let’s get started with our implementation. For our example, we’ll use a simple shopping cart application.

Create the model classes

In the project we created earlier, create the following classes in the Models folder.


public class Product
{
   public int Product_Id { get; set; }
   public string Product_Code { get; set; }
   public string Product_Name { get; set; }
   public double Product_Price { get; set; }
}

public class Order
{
   public int Order_Id { get; set; }
   public List<Product> Products { get; set; }
}

public class KeyStore
{
   public string Key { get; set; }
   public DateTime Expiry { get; set; }
}

While the Product and Order classes are typical of a shopping cart application, the KeyStore class is used here to store our idempotency keys. In this implementation, we'll save these keys in the database (via dbContext). Naturally, you could change the implementation to store the keys in a cache or any other data store.

Create the controller class

Right-click on the Controllers folder in the Solution Explorer Window and create an API controller called OrderController. Now, enter the following action method in the OrderController class. This method creates a new order.


[HttpPost]
public IActionResult CreateOrder([FromBody] Order order, [FromHeader(Name = "X-Idempotency-Key")] string key)
{
    if (string.IsNullOrEmpty(key))
    {
        return BadRequest("Idempotency key is required.");
    }
    if (_dbContext.KeyStore.FirstOrDefault(k => k.Key == key) != null)
    {
        // The key has been seen before, so this request was already processed
        var existingItem = _dbContext.Orders.FirstOrDefault(o => o.Order_Id == order.Order_Id);
        return Conflict(new { message = "Request has already been processed.", item = existingItem });
    }
    _dbContext.KeyStore.Add(new KeyStore { Key = key, Expiry = DateTime.Now.AddDays(7) });
    _dbContext.Add(order);
    _dbContext.SaveChanges();
    return Ok(order.Order_Id);
}

Examine the code above. An idempotency key is generated at the client side and passed in the request header. This key will be used by the server to ensure that repeated calls to the same action method will not create duplicate records in the database. In other words, if the key is already present in the KeyStore, then the request for creation of the resource will be ignored. The presence of the key in the KeyStore means that the request was already processed earlier.
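
One detail the example leaves open is cleanup: each key is stored with a seven-day Expiry, but nothing in the code above ever enforces it. A minimal sketch of a purge routine, assuming the same _dbContext, might look like the following; in production this would more likely run from an IHostedService or a scheduled job.


// Delete idempotency keys whose retention window has passed.
public void PurgeExpiredKeys()
{
    var expired = _dbContext.KeyStore.Where(k => k.Expiry < DateTime.Now);
    _dbContext.KeyStore.RemoveRange(expired);
    _dbContext.SaveChanges();
}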

Takeaways

By embracing idempotency, you can build APIs that are robust, reliable, and fault-tolerant. Idempotent APIs are particularly important and beneficial in distributed systems, where network issues might lead to large numbers of retried requests from the client side. That said, you should always validate input data to ensure data consistency before storing data in the database.


Cloud trends 2025: Repatriation and sustainability make their marks 19 Mar 2025, 3:30 pm

What are the top priorities and challenges related to the use of cloud computing? The Flexera 2025 State of the Cloud Report draws on the insights of 759 cloud decision-makers and users globally who took part in a survey in late 2024. The results illustrate the evolution of ongoing trends in past years, while simultaneously spotlighting the emergence of new forces driving cloud usage.

Workloads move back to data centers

A noteworthy shift of applications and data back from cloud to data centers—known as repatriation—is happening. Slightly more than one-fifth (21%) of workloads and data have been repatriated. However, ongoing migration to cloud and net-new cloud workloads outstrip these cloud exits, resulting in continued cloud growth.

Analysts and experts have, for some years now, indicated that organizations are moving cloud workloads back to their own data centers, often due to the inefficiencies and expenses that result from failing to refactor applications for cloud. Although net-new cloud workloads are still increasing, the frequency of repatriation is notable.

Sustainability gains ground

Cloud sustainability initiatives are becoming top of mind for many respondents. More than half (57%) of respondents either have or plan to have a defined sustainability initiative that includes carbon footprint tracking of cloud use within the next 12 months. With more than a third (36%) of all respondents already tracking their cloud carbon footprint, the need to do so has clearly been gaining traction.

Among European respondents, the number tracking their cloud carbon footprint rises to 43%. The gap between European respondents and respondents overall is closing; as an increasing number of global organizations adopt and adhere to important sustainability standards, this gap is expected to shrink even further.

Flexera - Sustainability of cloud use

Flexera

Generative AI is becoming mainstream

Not a surprise: Adoption of AI-related public cloud services is exploding. Almost half of respondents indicate that their organizations already use artificial intelligence/machine learning (AI/ML) platform-as-a-service (PaaS) services. This year’s survey also shows a surge in the use of data warehouse services, which are often used to feed AI models.

Generative AI use is also booming. Nearly three-quarters (72%) of organizations already use genAI either sparingly or extensively; another 26% are currently experimenting with genAI. Not only is genAI here to stay, but it’s becoming mainstream.

Flexera - Use of generative AI

Flexera

Cloud spend and security are the top challenges

Managing cloud spend is the top cloud challenge for organizations of all sizes, reported by 84% of respondents. As additional workloads find their way into the cloud, the need to manage and optimize the associated spend becomes paramount. Nearly nine out of 10 (87%) identify “cost efficiency/savings” as their top metric for assessing progress against cloud costs, making it the leading metric in this category, jumping from 65% a year ago. Similarly, “cost avoidance,” which can be achieved with proper license management, rose from 28% to 64% during the same period. As software-as-a-service (SaaS) usage increases, the focus on SaaS licensing is gaining increased attention, given the significant impact that SaaS expenses have on driving up cloud bills.

Following cloud spend as the top cloud challenge is security. Reported by 77%, security—always a top concern in the digital age—is the second-largest challenge for cloud initiatives. Among the tools used for managing multi-cloud initiatives, security tools take the number-one spot, with 55% of all respondents using them.

Public cloud adoption continues to accelerate

Public cloud spend continues to increase, with a third (33%) spending more than $12 million a year, up from 29% of respondents last year. Among enterprises (with more than 1,000 employees), the number spending this amount goes up to 40%. Even as cloud costs rise, more workloads are finding a home in the cloud. SaaS expenses remained fairly consistent year over year.

One area of hesitance is sensitive data. Organizations remain cautious about moving sensitive data to the cloud, although more than a third indicate that all of their non-sensitive data will move to the cloud.

Flexera - Annual public cloud spend

Flexera

Centralized initiatives grow

The approach to governing and optimizing cloud and SaaS costs is shifting from vendor management teams towards cloud centers of excellence (CCOEs) and FinOps teams, representing a centralized approach to cloud. Today 69% of respondents have a CCOE or central cloud team.

Additionally, cloud cost optimization strategies are increasingly being handled by FinOps teams. Nearly three-fifths (59%) of respondents now indicate that they have a FinOps team for some or all of their cloud cost optimization strategies, up from 51% a year ago. As FinOps gains additional traction within the cloud community, particularly with SaaS and data centers now part of the FinOps scopes, reliance on FinOps teams across organizations is anticipated to rise.

Flexera - Top cloud challenges

Flexera

AWS and Azure compete for dominance

Year over year, this ongoing research shows that there has been little change among the leaders, with many organizations seemingly having found their steady state regarding the cloud—or mix of clouds—they’re using. Among all respondents, it boils down to a race that continues between Amazon Web Services (AWS) and Microsoft Azure as leading public cloud providers. A close contest in recent years, the two providers trade leads, based on the number of workloads running.

Flexera - YoY public cloud adoption

Flexera

Historically, enterprises are more likely to utilize Azure than are small- to medium-sized businesses (SMBs, with fewer than 1,000 employees). Today, among enterprises, AWS holds a slight lead (53%) over Azure (50%) among organizations that run “significant workloads,” while Azure (81%) has the lead over AWS (79%) when also including “some workloads.”

As part of cloud strategy, organizations continue to embrace multi-cloud: 70% of respondents embrace hybrid cloud strategies, using at least one public and one private cloud, while the remaining 30% use only public clouds or private clouds. Large enterprises (with more than 10,000 employees) make use of multi-cloud tools more than smaller organizations, regardless of the tool type.

Looking ahead

Growing cloud usage, initiatives to optimize costs, competition between the top cloud providers, and the ongoing use of AI all promise to be hallmarks of cloud programs in 2025. The new emphases on repatriation and sustainability will modulate how cloud initiatives are managed.

Brian Adler is senior director of cloud market strategy at Flexera. He was previously a senior director analyst at Gartner and a member of the FinOps Foundation governing board.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


GitHub suffers a cascading supply chain attack compromising CI/CD secrets 19 Mar 2025, 12:42 pm

A sophisticated cascading supply chain attack has compromised multiple GitHub Actions, exposing critical CI/CD secrets across tens of thousands of repositories. The attack, which originally targeted the widely used “tj-actions/changed-files” utility, is now believed to have originated from an earlier breach of the “reviewdog/action-setup@v1” GitHub Action, according to a report from researchers at security firm Wiz.

The initial compromise of tj-actions/changed-files, designated as CVE-2025-30066, was discovered last week when researchers found malicious code injected into the tool. The Cybersecurity and Infrastructure Security Agency (CISA) has officially acknowledged the issue, noting that “This supply chain compromise allows for information disclosure of secrets including, but not limited to, valid access keys, GitHub Personal Access Tokens (PATs), npm tokens, and private RSA keys.”

CISA confirmed the vulnerability has been patched in version 46.0.1.

Given that the utility is used by more than 23,000 GitHub repositories, the scale of potential impact has raised significant alarm throughout the developer community.

The attack chain revealed

Security researchers at Wiz have now identified what they believe to be the root cause of this high-profile breach. According to their analysis, attackers first compromised the v1 tag of the reviewdog/action-setup GitHub Action, injecting similar code designed to dump CI/CD secrets to log files.

Since tj-actions/eslint-changed-files utilizes this reviewdog component, the initial breach created a pathway for attackers to steal a personal access token (PAT) used by the tj-actions system.

“We believe that it is likely the compromise of reviewdog/action-setup is the root cause of the compromise of the tj-actions-bot PAT,” Wiz researchers explained in their report. The timing of both compromises aligns closely, strengthening the connection between these security incidents.

The attack methodology involved a particularly sophisticated approach. Attackers inserted a base64-encoded payload into an install script, causing secrets from affected CI workflows to be exposed in workflow logs.

In repositories with public logs, these exposed secrets would be readily available to malicious actors, creating a significant security vulnerability across the GitHub ecosystem.

Widening impact assessment

The tj-actions developers had previously reported they could not determine exactly how attackers gained access to their GitHub personal access token. This new finding from Wiz provides the missing link, suggesting that the initial reviewdog compromise was the first domino in this cascading attack chain.

Beyond the confirmed compromise of reviewdog/action-setup@v1, the investigation has revealed several other potentially impacted actions from the same developer. These include reviewdog/action-shellcheck, reviewdog/action-composite-template, reviewdog/action-staticcheck, reviewdog/action-ast-grep, and reviewdog/action-typos. The full extent of the compromise across these tools remains under investigation.

While GitHub and reviewdog maintainers have implemented fixes, Wiz warns that if any compromised actions remain in use, a repeat attack targeting “tj-actions/changed-files” could still occur — especially if exposed secrets are not rotated.

Response and remediation

The original tj-actions breach prompted GitHub to take swift action, pulling access to the compromised tool by March 16 and replacing it with a patched version (beyond 45.0.7). However, this new information about the cascading nature of the attack suggests that the security implications extend far beyond the initial assessment.

Industry experts are particularly concerned about the method of compromise within the Reviewdog project.

Wiz researchers noted that the project “maintains a large contributor base and accepts new members via automated invites,” potentially creating security weaknesses in their permission structure. This highlights how organizational practices can inadvertently create vulnerabilities that affect downstream dependencies.

For organizations potentially affected by this breach, security teams should immediately check for any references to reviewdog/action-setup@v1 in their repositories. The presence of double-encoded base64 payloads in workflow logs would confirm that secrets have been leaked.

“In such cases, all references to affected actions should be removed across branches, workflow logs should be deleted, and any potentially exposed credentials must be rotated immediately,” the report suggested.

Future prevention strategies

To mitigate similar risks in the future, security specialists are recommending several preventative measures. Rather than using version tags when implementing GitHub Actions, developers should pin their actions to specific commit hashes, which are immutable and cannot be modified after creation.
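
In a workflow file, the change is a single line. The SHA below is illustrative, not the action's real commit hash:


# .github/workflows/ci.yml (illustrative)
steps:
  # Risky: a tag is mutable and can be repointed to malicious code
  # - uses: tj-actions/changed-files@v46
  # Safer: a full commit SHA is immutable once created
  - uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567 # v46.0.1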

“Additionally, organizations should leverage GitHub’s allow-listing feature to restrict unauthorized actions from running in their environments,” Wiz suggested in its findings.

The incident underscores a growing trend of supply chain attacks targeting development tools and infrastructure. As organizations increasingly rely on third-party components and actions to streamline their development processes, the potential impact of such compromises continues to grow. A single breach in a widely used tool can quickly cascade across thousands of projects, highlighting the interconnected nature of the modern development ecosystem.


SAP introduces Joule for Developers 19 Mar 2025, 11:17 am

SAP has added AI capabilities powered by its AI assistant, Joule, to SAP Build Process Automation and SAP Build apps, extending the existing AI capabilities in SAP Build Code and ABAP Cloud.

Joule for Developers, announced at this week's SAPInsider event in Las Vegas, Nevada, “is designed to empower developers to build more efficiently, deliver precise, contextualized outcomes with purpose-built LLMs, and integrate new AI tooling for seamless development,” the company said in a release.

Joule for Developers is incorporated into SAP Build to assist developers with low-code, pro-code, and automation projects. But, emphasized Bharat Sandhu, SVP and chief marketing officer for SAP Business Technology Platform, it is designed as a helper for developers, not a replacement.

It addresses two use cases for customers, he said: it makes developers more efficient by taking care of tedious work such as creating unit tests and generating test data, and it empowers new developers who might not be acquainted with business application development or SAP development.

The company said that Joule for Developers capabilities include:

  • Application creation: Generate code, UI, data models, and sample data across SAP programming models for Java, JavaScript, and ABAP.
  • Code optimization: Refactor code, create unit tests and generate code explanations, summarizations, and more with natural language queries and intuitive actions.
  • Process and workflow automation: Generate automation workflows and business rules using natural language queries.

It is powered by large language models (LLMs) tailored for SAP workloads, such as SAP's ABAP, allowing it to do predictive code completion based on context, comments, and project heuristics; generate code explanations; and assist with documentation creation, workflow development, and more.

“It leverages all the best practices and our SAP application programming models, which have been specially designed to extend and build around business applications,” Sandhu said, adding that a developer who has never built on SAP can give Joule for Developers a prompt and it will build the back end system, the front end UX, and the data model, allowing them to get started, “literally in minutes,” with a full application that they can customize. And if one of the more than 400 prebuilt line-of-business applications matches the functionality requested by the developer, Joule will recommend it.

He also pointed out that before the AI passes its output to the user, it runs it through internal checks to verify its accuracy and reduce the chance of hallucinations.

Joule for Developers differs from other AI coding assistants, noted Arnal Dayaratna, research vice president of software development at IDC, in “its deep specialization in ABAP that is attributable to SAP’s enhanced access to ABAP-specific training data.” Its integration with ABAP and SAP Build, he said, gives it “a unique capability” to support both pro-code and no-code developers.

He said, “These capabilities render it especially important for the SAP developer community and its associated ecosystem of ISVs.”

Jason Andersen, VP and principal analyst, Moor Insights & Strategy, agreed.

“Overall, it’s great news for the SAP developer community to have an AI assistant customized to their needs,” he said. “The key to this is training the model the assistant will leverage to specific SAP capabilities such as SAP Workflows and ABAP, since SAP has the knowledge to train a more precise AI assistant than a general-purpose coding model like that you would see from a cloud provider or an AI model. This is similar to what we are seeing from SaaS vendors who want something that will produce the best result for their ecosystem.”

He added, “What I find most refreshing about a solution like this is helping existing non-SAP developers with the onboarding process. Maybe it’s a new hire or a transfer from a different team. They will become more productive more quickly by using a tool like this.”

However, said Scott Bickley, an advisory fellow at Info-Tech Research Group, “Most enterprises are not going to invest in multiple AI platforms, so the race is on for which solution can do the most to bring them towards their goals. It is unrealistic to expect companies to invest in AI solutions from Salesforce, ServiceNow, SAP, and others. SAP has an advantage in that ERP solutions are the system of record and house a lot of critical data. SAP is banking on this fact to make it the system of choice and to use SAP Business Data Cloud to integrate non-SAP data sources into its AI ecosystem.”

And, he cautioned, “Prospective buyers should be cautious, however, as it is early days for all of these solutions. Nothing is proven at this point. Running proof of concept exercises while avoiding major financial commitments is critical at this stage. SAP’s solution is part of the BTP suite, so this is a consumption-based license model. This requires more investment on the front end of the evaluation process to ensure use cases are rock solid and consumption is predictable; if not, CFOs could be surprised by massive unforecasted invoices.”

For the time being, however, those invoices will not be a worry. Sandhu said that Joule for Developers is free, but pricing will be disclosed in a few months. “Right now, our mission is to get it in the hands of as many people as possible, get good usage, get good feedback from customers, and then we’ll figure out how to price for it afterward,” he said.


Astro with HTMX: Server-side rendering made easy 19 Mar 2025, 10:00 am

Astro.js is a well-thought-out and capable full-stack JavaScript platform that provides flexible technology choices on both the front and back end. It’s no surprise it currently has 50,000 stars on GitHub. Astro provides a structure to start with while also letting you explore a wide range of options for the technologies you’ll use.

My previous article introduced the basics of dynamic web application development with Astro. Now, we’ll go deeper with a look at the code and build process for the to-do demo application.

Server-side rendering with Astro and HTMX

Although Astro is best known as a server-side rendering (SSR) meta-framework that supports reactive front ends like React and Vue as plugins, it has evolved into an impressive back-end solution in its own right, with endpoints and routing to handle almost anything you throw at it.

In this demo, we’re going to put Astro through its server-side paces by using it to host HTMX views. What’s challenging (and interesting) about this approach is that we’ll be sending view fragments in our responses. That will require some finagling, as you’ll see.

To start, we can launch a new application with the standard Astro command-line tool (see the Astro.js documentation if you need information about installing and using the Astro CLI):


$ npm create astro@latest

We’ll be using dynamic endpoints, so Astro will need to know what kind of deployment adapter to use (see my previous article for a discussion of adapters). In our case, we’ll use the adapter for a Node integration:


$ npx astro add node

Services

Let’s start building our custom code at the service layer. The service layer gives us a central place to put middleware that can be reused across the app. In a real application, the service layer would interact with a data store via a data layer, but for our exploration we can just use in-memory data.

The convention in Astro seems to be to use a /lib directory for these types of things. All the code for Astro goes in the /src directory, so we’ll have our service code at src/lib/todo.js:



// src/lib/todo.js
let todosData = [
    { id: 1, text: "Learn Kung Fu", completed: false },
    { id: 2, text: "Watch Westminster", completed: true },
    { id: 3, text: "Study Vedanta", completed: false },
];

export async function getTodos() {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve(todosData);
        }, 100); // Simulate a slight delay for fetching
    });
}

export async function deleteTodo(id) {
    return new Promise((resolve) => {
        setTimeout(() => {
            todosData = todosData.filter(todo => todo.id !== id);
            resolve(true);
        }, 100);
    });
}

export async function addTodo(text) {
    return new Promise((resolve) => {
        setTimeout(() => {
            // Derive the next ID from the highest existing ID so that
            // deletions can't lead to duplicate IDs
            const nextId = todosData.reduce((max, t) => Math.max(max, t.id), 0) + 1;
            const newTodo = { id: nextId, text, completed: false };
            todosData.push(newTodo);
            resolve(newTodo);
        }, 100);
    });
}

One thing to notice in general is that all our functions return promises. Astro supports this out of the box, and it is excellent for service methods that eventually need to talk to a data store, since it avoids blocking on those network calls. In our case, we simulate a network delay with a timeout.

Otherwise, these are vanilla JavaScript calls that use some simple functional operations to perform the work we need on the in-memory array of todosData.

That’s all we need for the service layer.

The main view

Now let’s consider src/pages/index.astro. The astro create command put a simple welcome page in there, which we can repurpose. To start, we delete all the references to the Welcome component. Instead, we’ll use our own TodoList component, which we’ll build in a moment:


---
// src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import TodoList from '../components/TodoList.astro';
---
<Layout>
  <TodoList />
</Layout>

In Astro components (defined inside .astro files) we have two segments: the JavaScript inside the “code fences” (---) and the HTML-based template. This is similar to other templating technologies. But Astro is somewhat unique because by default everything is run on the server and packaged into an HTML bundle with minimal JavaScript, which is then sent to the client.

Reusable components

Now let’s take a look at the TodoList component. The components directory holds all our reusable .astro components, so TodoList is found in src/components/TodoList.astro:


---
// src/components/TodoList.astro
import { getTodos } from '../lib/todo';
import TodoItem from './TodoItem.astro';

const todos = await getTodos();
---
<ul id="todo-list">
  {todos.map(todo => (<TodoItem todo={todo} />))}
</ul>

<form hx-post="/api/todos" hx-target="#todo-list" hx-swap="beforeend">
  <input type="text" name="text" placeholder="Add a to-do" required />
  <button type="submit">Add</button>
</form>

<style>
  /* Component-scoped styles, trimmed for brevity */
  ul { list-style: none; padding: 0; }
</style>

As a side note, I’ve included a snippet of styles. Astro makes it easy to include component-scoped CSS as we’ve done here. I won’t discuss CSS much here because we are focused on the structure and logic of the Astro application.

TodoList imports the getTodos function from the service module we just saw and uses it to render the view. First it uses await to grab the to-dos, then in the template it loops over them with todos.map. For each todo, we use the TodoItem component and pass in a property holding the to-do data. We'll take a look at TodoItem shortly.

There's also a form that is used to create new to-dos. This uses HTMX to submit the form with background AJAX and avoid a page reload:

  • hx-post: Tells it to submit a POST request to /api/todos.
  • hx-target: Indicates where to put the response (into the to-do list element).
  • hx-swap: Fine-tunes how to add the new element (at the end of the list).

This whole part of the UI will be rendered ahead of time on the server and sent pre-packaged to the browser.

TodoItem component

Before looking at the API that will field the create requests, let’s look at the TodoItem component, at src/components/TodoItem.astro:


---
// src/components/TodoItem.astro
export interface Props {
  todo: { id: number; text: string; completed: boolean };
}
const { todo } = Astro.props;
---
<li>
  {todo.text} {todo.completed ? ' (Completed)' : ''}
  <button hx-delete={`/api/todos/${todo.id}`} hx-target="closest li" hx-swap="outerHTML">Delete</button>
</li>

The TodoItem accepts the properties we saw earlier. (See the Astro documentation to learn more about Astro properties, or props, including TypeScript interfaces.) Using the props, we create a simple list item for the todo and a Delete button that uses some HTMX to handle deletion via AJAX. The HTMX here uses hx-delete, which submits a DELETE request to the item's URL. The hx-target and hx-swap attributes show off some of the power of HTMX in these simple properties, allowing us to target the list item itself for deletion. (With hx-target="closest li" and hx-swap="outerHTML", the empty success response replaces the list item with nothing, removing it from the DOM.)

Delete endpoint

Next, let's look at how we can field the DELETE request. Astro's file-based routing lets us define our server endpoints inside /pages, alongside the views, using the same routing semantics. We'll create an /api subdirectory to help organize things, and we'll use a route parameter to capture the todo ID that was submitted for deletion, giving us the following file: src/pages/api/todos/[id].js:


import { deleteTodo } from '../../../lib/todo';

export const prerender = false;

export async function DELETE({ params }) {
    const id = parseInt(params.id, 10);
    if (isNaN(id)) {
        return new Response(null, { status: 400, statusText: 'Invalid ID' });
    }

    await deleteTodo(id);

    return new Response(null, { status: 200 }); // Empty response is sufficient for delete
}

Astro endpoints are just like views except without the template part. One important general note is that we use the prerender = false flag to ensure the engine doesn't try to prerender this endpoint when building; we want a dynamic endpoint. (The Astro docs refer to getStaticPaths() as a way to fine-tune which endpoint functions are dynamic. In our case, prerender lets us indicate everything is dynamic.)

This endpoint is denoted as a DELETE handler and uses the service function to do the work on the data. It sends an empty Response object back with a 200 success code. When the HTMX on the front end gets that response, it removes the item from the view.

Todo creation endpoint

The last major piece of the puzzle is handling the todo creation endpoint. We need to accept the text of the new todo, add it to the list, and then send back the markup to be inserted into the list.

In most cases this would be done with another server endpoint. However, Astro is still working on the ability to render components programmatically. (You can follow the work in the Container API: render components in isolation roadmap and proposal.)

For now, we can use a fairly painless workaround: use a page view to handle the request and reuse the TodoItem.astro component to send a response fragment. This keeps our code DRY.

Here's what our pseudo-endpoint looks like at src/pages/api/todos/index.astro:


---
// src/pages/api/todos/index.astro
import { addTodo } from '../../../lib/todo.js';
import TodoItem from '../../../components/TodoItem.astro';

export const prerender = false;

let newTodo = null;
if (Astro.request.method === 'POST') {
  const formData = await Astro.request.formData();
  newTodo = await addTodo(formData.get('text'));
}
---
{newTodo && <TodoItem todo={newTodo} />}

You'll notice the JavaScript in the component front matter has full access to the request, so we have no problem filtering according to the method type. In general, the front matter is a full-blown server-side function. Also, notice that we again indicate a dynamic component with prerender = false.

The main work is in turning the request form body into a new to-do item using the create function from our service utility. Then we use that item as the prop on the TodoItem component. The net result is that the response will be whatever TodoItem.astro renders, using the new item data.
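
You can sanity-check this pseudo-endpoint outside the browser. Assuming the dev server is running on Astro's default port, a form-encoded POST should return just the rendered list-item fragment:


$ curl -X POST -d 'text=Buy milk' http://localhost:4321/api/todos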

Run it

To run the app in dev mode, enter:


$ npx astro dev

To create a production build (output to /dist), enter:


$ npx astro build

Impressions of the Astro.js development experience

Astro is nice to work with. Its dev mode is fast and pretty stable, and it is very good at reloading only the chunks that have been modified. It also presents helpful errors in the browser, with hotlinks to relevant docs, like this one:

Screenshot of the Astro/HTMX demo app.

Matthew Tyson

This type of error reporting demonstrates a high degree of care about developer experience.

We did hit a kind of edge case with the todo creation endpoint, but the workaround wasn't too painful. Moreover, the Astro.js project is hot on the trail of an official solution.

It's tough to compete in this field with so many excellent and established frameworks, but it's safe to say Astro is doing it.

See my GitHub repository for all the demo code from this article.


You can build it on a Chromebook 19 Mar 2025, 10:00 am

I was a Windows guy from the very beginning. I used Windows 1.02 on my IBM PS/2 Model 25 back in the late 1980s. I was thrilled to build applications for Windows 3.1 using Turbo Pascal for Windows. I remember how fun it was to allocate a whopping 2GB of RAM for an array. I stuck with Windows 2000 for many years, then became a Windows 7 die-hard. (Let's not talk about Windows 8, okay?) Today I use Windows 11 all day in my work. Windows has been with me since the beginning.

But a couple of years ago, I realized a few things. First, I realized that I had gotten to the point where I spent 98% of my time in the Chrome browser and seldom ran Windows applications. The only Windows application I used frequently was Visual Studio Code.

After a while, I realized that Chromebooks were really Linux under the hood and I could run VS Code natively on one, so I made the switch. To my surprise, it ran just as smoothly as it did on Windows, with all the extensions I needed. The only real limitation was system memory. On an 8GB Chromebook, heavier workloads could slow things down.

A few adjustments

There were a few adjustments. I was pleased to find that ChromeOS provided many standard utilities, like a calculator and a text editor. I also found browser-based alternatives for a few more complex Windows utilities, like Chrome Remote Desktop. Finally, I figured out how to run Postman on ChromeOS's Linux Development Environment alongside VS Code. Then I was pretty much set.

As a Google Pixel phone user, I was already deeply immersed in the Google universe (and quite uninterested in the world of Apple), so switching to ChromeOS was a natural migration for me.

One of the main draws to ChromeOS was its simplicity. As the browser has become the center of the computing universe, there is little I need to do that can't be done in Chrome. I've never been a big Microsoft Office user, so the Google office suite was more than enough. I'm not much of a gamer, but even browser-based gaming is coming along. Before long, I'll be able to do all my development inside the browser.

A Chromebook boots in seconds and updates itself in the background. No more long, arduous, fraught-with-peril updates done at inconvenient times. The Blue Screen of Death is a thing of the past. There is no bloatware to delete and no viruses to worry about, and thus no heavy virus-scanning software is necessary.

Smooth sailing

The hardware is generally inexpensive but pretty robust, and it is incredibly easy to set up. However, most Chromebooks come with just 8GB of RAM, which, as I mentioned above, gets pushed to the limit when running VS Code in Linux. Finding a Chromebook with 16GB of RAM can be a challenge, and such machines are disproportionately expensive, which makes the cost advantage less compelling, I suppose.

Of course, while I don't want it to happen, I don't worry about my Chromebook being lost or run over by a bus, because it is quite easy to replace. The time from unboxing to being back in business is minutes, not hours as it would be with a Windows machine. While a couple hundred bucks is nothing to sneeze at, knowing that I can replace a missing or out-of-commission machine easily and quickly is nice indeed. (The downside here, though, is that the temptation to get a newer, faster, better machine is harder to resist.)

It wasn't without trepidation that I made the switch, but it's been two years of quite smooth sailing. I'm not a Linux genius by any stretch of the imagination, but even installing VS Code was nothing more than double-clicking on a *.deb file. Painless. ChromeOS even created an icon to run VS Code in the start menu.

Fast, easy, simple, and cheap. If only everything in tech worked that way.


Oracle reveals five new features coming to Java 18 Mar 2025, 10:36 pm

With JDK (Java Development Kit) 24 having just reached general availability, Oracle has given a sneak peek at Java features set to arrive in the not-too-distant future, ranging from enhanced primitive boxing to null-restricted value class types.

Oracle on March 18 cited five features being prepared for upcoming Java releases: enhanced primitive boxing, null-restricted value class types, value classes and objects, derived record creation, and stable values, an API that has been officially targeted for the JDK 25 release due this September. JDK Enhancement Proposals (JEPs) have been published for all five features, which are now in a preview stage:

  • Enhanced primitive boxing uses boxing to support language enhancements that treat primitive types more like reference types. Goals include allowing boxing of primitive values when they are used as the “receiver” of a field access, method invocation, or method reference, and allowing unboxed return types when overriding a method with a reference-typed return. Also, primitive types would be supported as type arguments.
  • Null-restricted value class types allow the type of a variable that stores value objects to exclude null, enabling more compact storage and other optimizations at run time. Null-restricted value class types are being previewed as both a language feature and a virtual machine feature.
  • Value classes and objects enhance the Java platform with value objects, which are class instances that have only final fields and lack object identity. Goals include allowing developers to opt in to a programming model for simple values, in which objects are distinguished solely by their field values. The proposal also would maximize the freedom of the JVM to encode simple values in ways that improve memory footprint, locality, and garbage collection efficiency.
  • Derived record creation enhances the language with the ability to create a new record from an existing one (see the sketch after this list). One goal is providing a concise means to create new record values derived from existing record values. Another goal is streamlining the declaration of record classes by eliminating the need to provide explicit wither methods, which are the immutable analogue of setter methods.
  • Stable values are objects that hold immutable data. Because stable values are treated as constants by the JVM, they allow for the same performance optimizations that are enabled by declaring a field final. At the same time, they offer greater flexibility as to the timing of initialization. Goals of the proposal include improving the startup of Java applications by breaking up the monolithic initialization of application state.

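    To make these proposals concrete, a few illustrative sketches follow. First, enhanced primitive boxing. The snippet below shows the kind of code the draft JEP aims to allow, per the goals listed above; the syntax is not final and will not compile on any shipping JDK today.

        // Sketch only: proposed usage from the enhanced primitive boxing draft JEP.
        // Requires a future JDK with the feature enabled; details may change.
        import java.util.List;
        import java.util.function.Supplier;

        class BoxingSketch {
            static void demo() {
                int i = 42;
                String s = i.toString();           // method invocation on a primitive receiver
                Supplier<String> f = i::toString;  // method reference with a primitive receiver
                List<int> ints = List.of(1, 2, 3); // a primitive type used as a type argument
            }
        }
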
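    Next, value classes and null-restricted types, which work together. This sketch follows the syntax shown in the value classes proposal (a value modifier, with implicitly final fields) and the null-restricted draft (a ! marker on the type); both are preview material and subject to change.

        // Sketch only: preview syntax from the value classes and null-restricted
        // value class types proposals; it will not compile on current JDKs.
        value class Point {        // instances have no identity, only state
            int x;
            int y;                 // fields in a value class are implicitly final
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        class Usage {
            Point! origin = new Point(0, 0); // '!' excludes null from this variable,
                                             // enabling flatter, more compact storage
        }
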
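    Derived record creation adds a with expression, so a new record can be derived from an existing one without hand-written wither methods. The sketch below uses the syntax from the preview JEP and would require a JDK build with preview features enabled.

        // Sketch only: the 'with' expression from the derived record creation preview JEP.
        record Point(int x, int y, int z) {}

        class DerivedDemo {
            static void demo() {
                Point p1 = new Point(1, 2, 3);
                Point p2 = p1 with { y = 20; }; // copy p1, replacing only the y component
                System.out.println(p2);         // Point[x=1, y=20, z=3]
            }
        }
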
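    Finally, stable values. The sketch below is modeled on the StableValue API described in the JEP targeted for JDK 25; because it is a preview API, names and signatures may still shift. The idea is a field that is initialized lazily, at most once, yet can be optimized by the JVM as if it were final.

        // Sketch only: the StableValue preview API as described in the JEP;
        // method names may change before the feature ships.
        class OrderService {
            // Unset at construction; once set, the JVM may treat it like a final field.
            private final StableValue<String> greeting = StableValue.of();

            String greeting() {
                // Computed on first call, at most once, then cached by the runtime.
                return greeting.orElseSet(() -> "hello, " + System.getProperty("user.name"));
            }
        }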

    Oracle, Nvidia partner to add AI software into OCI services 18 Mar 2025, 9:00 pm

    Oracle and Nvidia are partnering to make the latter’s AI Enterprise software stack available via Oracle Cloud Infrastructure (OCI) services.

    Nvidia’s AI Enterprise stack will be available natively through the OCI Console and anywhere in OCI’s distributed cloud, giving enterprises access to more than 160 AI tools for training and inference, including NIM microservices, the companies said in a joint statement at Nvidia’s annual GTC conference.

    The AI-focused software stack will enable enterprises to combine its tools with OCI services for building applications and managing data across deployments, they added.

    Analysts believe that the integration of Nvidia’s AI software stack into OCI will provide developers and enterprises with a variety of advantages.

    “Developers stand to be able to seamlessly leverage technologies such as Nvidia NeMo, NIM, and RAPIDS, all of which are part of Nvidia AI Enterprise stack. They can use NeMo to build, train, and fine-tune large language models, or use NIM to facilitate the deployment of AI models as microservices,” said Arnal Dayaratna, research vice president at IDC.

    Moor Insights and Strategy principal analyst Jason Andersen believes that the integration will drive a better user experience, which is important for users working in DevOps, as well as provide consistency in terms of support and billing.

    Further, Charlie Dai, vice president and principal analyst at Forrester, said that the integration will drive efficiency for developers and enterprises by reducing setup time and simplifying access to advanced AI tools.

    As an extension of Oracle’s strategy to help enterprise users deploy faster with minimal setup, the cloud services provider has integrated Nvidia’s NIM microservices into OCI Data Science.

    “Data scientists can access preoptimized microservices from Nvidia NIM directly in OCI Data Science to support real-time AI inference use cases without the complexity of managing infrastructure,” Oracle said in a statement.

    Explaining how NIM would reduce complexity for enterprise users, Dai said that NIM provides containers for GPU-accelerated inferencing microservices for pre-trained and customized AI models, and that the integration will aid data scientists by simplifying AI model deployment, enhancing scalability, and accelerating time-to-insight.

    NIM’s pre-built container instances and standard APIs are also expected to simplify hosting and accelerate integration with upstream and downstream applications, said Brian Alletto, director of West Monroe’s technology and experience practice.

    What will Oracle and Nvidia gain from the partnership?

    The partnership, according to analysts, is a symbiotic one, to say the least.

    While making software such as the AI Enterprise stack and NIM available through its services increases customer adoption of Oracle’s core compute and storage offerings, the partnership gives Nvidia first-class citizenship with a major cloud provider, driving additional usage of its GPUs and related software and resulting in more stickiness, Alletto explained.

    Additionally, Moor Insights and Strategy’s Andersen pointed out that the integration may boost Oracle’s image as a provider of AI offerings.

    “While Oracle has an excellent history with enterprise data, it is not as well known for its AI tools or model capabilities, so this is a way for Oracle to promote a solution that represents the most credible authorities in both data and AI,” Andersen said.

    “It’s a good combo versus some of Oracle’s competitors, such as AWS, Microsoft, and Google Cloud, who have built a lot more of their own capabilities,” the analyst added.

    On the flip side, Andersen believes that the partnership will make Nvidia’s software stack more credible to other cloud service providers, even though they have their own tooling and capabilities and don’t strictly need the chipmaker’s software services.

    Oracle adds AI Blueprints to OCI

    In order to further help its enterprise customers simplify and accelerate their AI deployments, Oracle is adding AI Blueprints, no-code deployment recipes, to OCI.

    OCI AI Blueprints supports automatic scaling (autoscaling) of inference workloads to handle varying traffic loads efficiently, Oracle said.

    “When demand increases, OCI AI Blueprints can spin up more pods (containers running inference jobs) and, if needed, provision additional GPU nodes. When demand decreases, it scales back down to save resources and cost,” it explained.

    However, West Monroe’s Alletto pointed out that while tools such as AI Blueprints enable rapid deployment and can be optimized later as product use cases mature, balancing deployment velocity against cost remains a challenge.

