TUF – The Update Framework

The Update Framework (TUF) helps developers secure software update systems. It provides protection against attacks even in the event that the repository or signing keys are compromised. Developers can integrate TUF’s flexible framework and specification into any software update system.

TUF is hosted by the Linux Foundation as part of the Cloud Native Computing Foundation (CNCF). It is a graduated project, now used in production by many open-source and technology organisations. Additionally, the automotive sector makes heavy use of a TUF variant known as Uptane to secure over-the-air updates.

Why use TUF?

There are thousands of different software update systems in use right now; even a typical Windows user’s PC probably runs twenty or more different software updaters. Despite their differences, these systems have one thing in common: they all check for and download updates that bring new features or fix security flaws.

Software is a living thing; some repositories update project or package metadata every few minutes. As the volume of updates keeps increasing, so does the need for reliable protection of the systems that manage them.

While numerous tactics have been used over the past decade to validate update files and strengthen the security of update systems, many have proven to be weak points in the event of an attack.
TUF was established about ten years ago as a way to protect systems from key compromises and other threats that could corrupt a repository or spread malware. The following main goals guided its design:

  1. Creating a framework, including libraries, file formats, and tools, that can be used to protect both brand-new and existing software update systems.
  2. Providing tools to reduce the effects of key compromises.
  3. Making sure there is flexibility to meet the various demands of different software update systems.
  4. Facilitating easy integration with existing software update systems.

Software Updates

An application that runs on a client system and is in charge of locating, acquiring, and installing software updates is referred to as a software update system. Software update systems fall into three main categories:

  • Application updaters. Built-in components that allow an application to update itself. For instance, Firefox updates itself using an application updater.
  • Package managers. For installing additional libraries or dependencies, available for several programming languages. Examples include Ruby’s RubyGems, Python’s pip/easy_install combined with PyPI, Perl’s CPAN, and PHP’s Composer.
  • System package managers. Used by operating systems to control software updates and installations on client computers. APT from Debian, YUM from Red Hat, and YaST from openSUSE are a few examples.

Although the specific techniques may vary, most software update systems follow a similar procedure, which can be summed up as follows:

  • Update detection: The system learns when updates are available, either through routine checks or through notifications from a central server or repository.
  • Update download: Once an update is found, the system downloads the required files, typically fetching them from a remote server or repository.
  • Update installation: After downloading the update files, the system applies the changes, often by replacing or modifying existing files or configurations to match the update’s requirements.
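The three steps above can be sketched as a minimal, hypothetical update client. The manifest format, version scheme, and function names here are illustrative assumptions, not part of any real updater:

```python
import hashlib

# Hypothetical manifest a repository might publish for update detection.
REMOTE_MANIFEST = {
    "version": "2.1.0",
    "sha256": hashlib.sha256(b"new-binary-bytes").hexdigest(),
}

def update_available(local_version: str, manifest: dict) -> bool:
    """Step 1: update detection - compare local and remote versions."""
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return parse(manifest["version"]) > parse(local_version)

def download_update(manifest: dict) -> bytes:
    """Step 2: download - stubbed here; a real client would fetch over HTTPS."""
    return b"new-binary-bytes"

def install_update(payload: bytes, manifest: dict) -> bool:
    """Step 3: installation - verify integrity before replacing files."""
    if hashlib.sha256(payload).hexdigest() != manifest["sha256"]:
        return False  # reject corrupted or tampered payloads
    # ... replace existing files / configurations here ...
    return True

if update_available("2.0.3", REMOTE_MANIFEST):
    payload = download_update(REMOTE_MANIFEST)
    installed = install_update(payload, REMOTE_MANIFEST)
```

Note that even this toy client checks a hash before installing; the sections below explain how TUF hardens exactly this kind of verification against compromised repositories and keys.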

How does TUF secure updates?

By including verifiable information about the state of a repository or application, TUF enhances the security of software updates. It does so through metadata that records key trust information, cryptographic hashes of files, metadata signatures, metadata version numbers, and metadata expiration dates. This metadata acts as a record that can be checked to determine the validity of update files.

The beauty of TUF is that it shields your software update system from the complexity of handling this additional metadata and its underlying procedures. TUF takes charge of locating updates, downloading them, and verifying them against the repository metadata that was also downloaded. If the downloaded update files pass verification, TUF securely hands them to your software update system for installation.
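A simplified sketch of the checks that TUF-style metadata enables is shown below. The field names and structure are illustrative, not the actual TUF metadata format, and signature verification is omitted; real integrations should use the python-tuf client library rather than code like this:

```python
import hashlib
from datetime import datetime, timezone

# Simplified stand-in for TUF-style metadata; field names are illustrative.
metadata = {
    "version": 5,
    "expires": "2030-01-01T00:00:00Z",
    "targets": {
        "app-2.1.0.tar.gz": {
            "sha256": hashlib.sha256(b"release-bytes").hexdigest(),
            "length": len(b"release-bytes"),
        }
    },
}

def verify_target(name: str, data: bytes, meta: dict, last_seen_version: int) -> bool:
    """Checks modeled on what TUF metadata enables: freshness, rollback
    protection, and file integrity (signature checks omitted for brevity)."""
    expires = datetime.fromisoformat(meta["expires"].replace("Z", "+00:00"))
    if datetime.now(timezone.utc) >= expires:
        return False          # stale metadata: possible freeze attack
    if meta["version"] < last_seen_version:
        return False          # version went backwards: possible rollback attack
    info = meta["targets"].get(name)
    if info is None:
        return False          # unknown target file
    return (len(data) == info["length"]
            and hashlib.sha256(data).hexdigest() == info["sha256"])
```

The expiration and version checks are what distinguish this from a plain hash comparison: they let a client detect a repository that is frozen in time or rolled back to a vulnerable release.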

Getting started with Open Cloud Technologies

A best-practices webinar by OpenCloudification on the use of cloud and open cloud technologies for technology and manufacturing companies. It is part of the OpenCloudification.com series supporting ICT and manufacturing companies in embracing cloud and open cloud technologies and in their digital transformation. Topics covered include: why a technology or manufacturing company would consider cloud and open cloud, technology and manufacturing use cases, the basic business cases for technology and manufacturing companies, make-your-own options, first steps, and what to prepare for cloudification and digitization.

Cybersecurity Monitoring

Chronicle is Google’s cloud-native security operations suite; here we focus on evaluating its Security Information and Event Management (SIEM) features and usability from a practitioner’s perspective.

Derived from log collection and analysis, the SIEM promised more advanced alerting by correlating logs from multiple sources, using them to generate alerts or even automatically eliminate false-positive detections. SIEMs also allowed organisations to support threat hunting and more comprehensive security investigations.
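The core correlation idea can be illustrated with a toy sketch. The event formats, field names, and threshold below are assumptions for illustration only; real SIEM rules are far richer:

```python
from collections import defaultdict

# Toy log events from two sources; formats and thresholds are illustrative.
events = [
    {"source": "firewall", "type": "blocked",      "ip": "10.0.0.5"},
    {"source": "auth",     "type": "login_failed", "ip": "10.0.0.5"},
    {"source": "auth",     "type": "login_failed", "ip": "10.0.0.5"},
    {"source": "auth",     "type": "login_failed", "ip": "10.0.0.9"},
]

def correlate(events, threshold=2):
    """Alert on an IP seen in suspicious events from more than one source,
    or exceeding a failure threshold - the core SIEM correlation idea."""
    by_ip = defaultdict(lambda: {"sources": set(), "count": 0})
    for e in events:
        rec = by_ip[e["ip"]]
        rec["sources"].add(e["source"])
        rec["count"] += 1
    return [ip for ip, rec in by_ip.items()
            if len(rec["sources"]) > 1 or rec["count"] >= threshold]

alerts = correlate(events)
```

Here 10.0.0.5 triggers an alert because two independent sources report it, while a single isolated failure from 10.0.0.9 does not; seen in isolation, neither log line would justify an alert on its own.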

Cloud-native SIEMs emerged to address the CAPEX problem and challenges with scale/elasticity. Now organizations could largely pay by the volume of logs ingested, allowing the cloud-native SIEM provider to deal with the backend issues of hardware.

Google’s cloud-native SIEM Chronicle is designed from the ground up to address shortcomings found in other SIEMs.

The entire design of Chronicle SIEM focuses on customer outcomes. There are four pillars of security that Chronicle addresses:

  1. Provide complete visibility into the security environment.
  2. Enrich data in the SIEM with Google’s threat intelligence and external sources, enabling security analysts to rapidly operationalize it.
  3. Apply modern threat detection to data ingested into the SIEM, without relying on customers to have dedicated security engineering resources on staff.
  4. Facilitate seamless response to accelerate the investigation by integrating with SOAR platforms (including Chronicle SOAR, formerly Siemplify).

For more information on SIEM, Chronicle SIEM, and Chronicle – Google Security, please visit:

An in-depth SANS article guides you through Chronicle SIEM compared with other SIEMs.

Chronicle Security – SIEM on Github : https://github.com/chronicle

Chronicle Security – Chronicle SIEM on Google : https://cloud.google.com/solutions/security-information-event-management


A database is a system for storing and organizing data so that it can be easily accessed, updated, and managed. Databases are commonly used to store large amounts of data for a wide range of purposes: storing customer and product information for an e-commerce website, tracking financial transactions for a bank, recording patient records for a healthcare organization, or managing inventory and orders for a manufacturer.

There are several types of databases, including relational databases, which use a tabular structure, and NoSQL databases, which store data in various formats such as documents, key-value pairs, or graph structures.

Databases are accessed and managed through specialized software such as database management systems (DBMS) and the Structured Query Language (SQL), a language used to manage and manipulate data stored in relational databases. SQL is the standard language for interacting with relational databases and is widely used across industries and applications.
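The relational model and SQL can be tried directly with Python’s built-in sqlite3 module. The table and data below are illustrative examples, not taken from any real system:

```python
import sqlite3

# In-memory relational database; schema and rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        city  TEXT
    )
""")
conn.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Alice", "Ghent"), ("Bob", "Antwerp"), ("Carol", "Ghent")],
)

# SQL lets you select and filter rows declaratively.
rows = conn.execute(
    "SELECT name FROM customers WHERE city = ? ORDER BY name", ("Ghent",)
).fetchall()
# rows == [("Alice",), ("Carol",)]
```

The same SELECT/INSERT statements work largely unchanged against MySQL, PostgreSQL, or SQL Server, which is exactly why SQL is described above as the standard language for relational databases.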

Types of Databases

There are several types of databases that are commonly used in cloud computing environments depending on multiple factors, like needs and use cases. A non-exhaustive list is the following:

  • Relational databases. These are traditional databases that store data in a tabular format, with rows representing individual records and columns representing the attributes of those records. Examples include MySQL, Oracle, and Microsoft SQL Server.
  • NoSQL databases. These databases are designed to handle large amounts of data either structured, semi-structured, or unstructured. They do not use the tabular format of relational databases and can store data in a variety of formats such as key-value pairs, documents, or graph structures. Examples include MongoDB, Cassandra, and Amazon DynamoDB.
  • In-memory databases. These databases store data in memory rather than on disk, which allows them to provide extremely fast access to data. They are often used for real-time analytics or other applications that require low latency. Examples include Redis and Memcached.
  • Graph databases. These databases are designed to store and manage data that is structured in the form of a graph, with nodes representing entities and edges representing relationships between those entities. They are often used for applications such as recommendation engines and social networks. Examples include Neo4j and Amazon Neptune.
  • Time-series databases. These databases are designed specifically for storing and querying time-series data, which is data that is collected and recorded over time. They are often used for applications such as monitoring and analytics. Examples include InfluxDB, Prometheus, and TimescaleDB.
  • Key-value stores. These databases are designed to store and retrieve data using a simple key-value pair and are often used for applications such as caching and storing simple data structures. Examples include Redis and Amazon DynamoDB.
  • Search engines. These databases are optimized for fast search and retrieval of data and are often used for applications such as e-commerce websites and online search engines. Examples include Elasticsearch and Apache Solr.
  • Streaming databases. These databases are designed to handle high-velocity streams of data and are often used for applications such as real-time analytics and event processing. Examples include Apache Kafka and Amazon Kinesis.
Types of Databases. Source: Xenonstack blog, different types of databases
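To make one of the categories above concrete, here is a toy sketch of the key-value caching pattern that stores like Redis and Memcached are used for. It is a deliberately minimal in-process illustration, not a substitute for a real in-memory database:

```python
import time

class TTLCache:
    """Minimal in-memory key-value store with per-key expiry - a toy sketch
    of the caching pattern Redis or Memcached are typically used for."""

    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp or None)

    def set(self, key, value, ttl=None):
        # ttl is the time-to-live in seconds; None means "never expires".
        expires = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires)

    def get(self, key, default=None):
        item = self._data.get(key)
        if item is None:
            return default
        value, expires = item
        if expires is not None and time.monotonic() >= expires:
            del self._data[key]  # lazily evict expired entries
            return default
        return value

cache = TTLCache()
cache.set("session:42", {"user": "alice"}, ttl=0.05)
```

Real in-memory stores add persistence options, eviction policies, and network access on top of this basic key-to-value-with-expiry idea.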

Deploying your Database in the Cloud

Cloud-based databases are databases that are hosted and managed by a cloud provider, rather than being installed and managed locally on on-premises hardware. Several types of cloud-based databases are available:

  • Fully managed: fully managed by a cloud provider, which means that the provider takes care of all aspects of the database, including setup, configuration, security, monitoring, and maintenance.
  • Database as a service (DBaaS): offered as a service, which means that the user pays a subscription fee to access and use the database. The provider takes care of all aspects of the database, including setup, configuration, security, monitoring, and maintenance.
  • Virtual machine-based: deployed on a virtual machine in the cloud. The user is responsible for setting up and configuring the database and may also be responsible for tasks such as security, monitoring, and maintenance.
  • Container-based: deployed in a container in the cloud. Like the VM-based, the user is responsible for setting up and configuring the database, as well as tasks such as security, monitoring, and maintenance.
  • Hybrid: combine elements of different database types, such as relational, NoSQL, and in-memory technologies, in order to provide a flexible and scalable solution for storing and managing data.
  • Serverless: fully managed by a cloud provider and designed to automatically scale up or down based on demand. They do not require the user to provision or manage any infrastructure and are often billed based on usage.
  • Distributed: designed to scale horizontally across multiple servers and used for applications that require high availability and horizontal scalability.
  • NewSQL databases: designed to provide the scalability and performance of NoSQL databases, while still maintaining the ACID (atomicity, consistency, isolation, durability) properties of traditional relational databases.
  • Multimodel: support multiple data models, such as document, graph, and key-value, within a single database system. They are often used for applications that require a flexible and versatile solution for storing and managing data.

Benefits of using Cloud-based Databases

Some common benefits of using cloud-based databases include:

  • Scalability. Cloud-based databases can easily scale up or down to meet changing demand, without the need to purchase and install additional hardware.
  • Elasticity. Cloud-based databases can automatically adjust their capacity to match the workload, which can help to reduce costs.
  • High-availability. Cloud-based databases typically offer high-availability options, such as automated failover and backup, to ensure that the database is always available.
  • Easy maintenance. Cloud-based databases are typically managed by the cloud provider, which means that the user does not have to worry about tasks such as software updates, backups, and monitoring.
  • Reduced upfront costs. Cloud-based databases do not require the user to purchase and maintain expensive hardware, which can reduce upfront costs and help to lower the total cost of ownership.
  • Flexibility. Cloud-based databases offer a variety of deployment options, including fully managed, database as a service (DBaaS), and virtual machine or container-based deployments, which allow users to choose the option that best fits their needs.
  • Integration with other cloud services. Cloud-based databases can be easily integrated with other cloud-based services, such as analytics and machine learning, which can help users to extract more value from their data.

Challenges of using Cloud-based Databases

There are also some potential downsides to using cloud-based databases, including:

  • Dependence on internet connectivity. Cloud-based databases require an internet connection to access and manage data, which can be a problem in areas with unreliable or limited internet connectivity.
  • Security concerns. Some users may be concerned about the security of their data when it is stored in the cloud. While cloud providers typically have robust security measures in place, there is always a risk of data breaches or unauthorized access.
  • Limited control. Because cloud-based databases are managed by the cloud provider, users have less control over certain aspects of the database, such as configuration and maintenance.
  • Vendor lock-in. Users of cloud-based databases may be more reliant on the cloud provider and may face challenges if they want to switch to a different provider.
  • Cost. While cloud-based databases can be cost-effective in many cases, the cost of using a cloud-based database can vary depending on the specific needs of the user and the pricing model of the provider. In some cases, the cost of using a cloud-based database may be higher than the cost of running an on-premises database.
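The cost trade-off in the last point can be sketched with a break-even calculation. The prices below are entirely hypothetical placeholders; real cloud and on-premises costs vary widely by provider, region, and workload:

```python
# Hypothetical, illustrative prices - real cloud pricing varies widely.
ON_PREM_MONTHLY = 1200.0    # fixed monthly cost: hardware amortization + ops
CLOUD_PRICE_PER_GB = 0.25   # usage-based cost per GB of data handled per month

def cheaper_option(gb_per_month: float) -> str:
    """Compare a fixed on-premises cost with usage-based cloud pricing."""
    cloud_cost = gb_per_month * CLOUD_PRICE_PER_GB
    return "cloud" if cloud_cost < ON_PREM_MONTHLY else "on-premises"

# Usage level at which the two options cost the same.
break_even_gb = ON_PREM_MONTHLY / CLOUD_PRICE_PER_GB  # 4800 GB/month
```

Under these made-up numbers, low-volume workloads favour pay-per-use cloud pricing while high, steady volumes favour the fixed cost; the general point is that neither option is cheaper in the abstract, only relative to a usage profile.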

OpenCloudification for Databases

OpenCloudification can provide guidance and best practices on adopting the best database solutions, depending on needs, use cases and technologies. A regularly updated list will illustrate some of the tools and components of interest addressed during the OpenCloudification activities. For more information on each technology, check the individual pages on the OpenCloudification website and the official documentation and website.

Cloud Native Computing Foundation technologies, both commercial (licensed) and open-source:

Other Technologies:

  • Firemon


Containerization is the technology that allows you to package an application and its dependencies together in a container, which can then be run on any machine that has a container runtime installed. This allows you to deploy and run applications in a predictable and consistent manner, regardless of the underlying infrastructure.

Containers are isolated from each other and the host operating system, which makes them lightweight and portable. This makes them an attractive alternative to traditional virtualization, which requires a full operating system installed on each virtual machine.

Containers are often used in conjunction with microservices, which are small, modular units of code that can be developed, tested and deployed independently of each other. This allows for faster development cycles and easier maintenance and updates of complex applications.

Containers are also the usual way to implement serverless computing. In this model, you package your application and its dependencies into a container, and the cloud provider runs the container on demand. This allows you to take advantage of the benefits of containerization, such as portability and isolation, while also taking advantage of the pay-per-use model of serverless computing in public clouds. Open-source solutions are also available for serverless architectures.

Containers vs Virtual Machines (VMs)

Containers and VMs are both technologies that allow you to run applications in an isolated environment. However, there are some key differences between the two:

  • Isolation: Both containers and VMs provide isolation, but they do it in different ways. VMs use hardware virtualization to create a separate environment for each application, which includes its own operating system, system libraries, and application code. Containers, on the other hand, share the host operating system and use namespaces and cgroups to provide isolation. This means that containers are generally lighter weight and use fewer resources than VMs.
  • Portability: Containers are generally more portable than VMs because they do not include a full operating system and are therefore more lightweight. This makes it easier to move containers between environments, such as from a developer’s laptop to a staging environment to production. VMs, on the other hand, are more complex and include a full operating system, which makes them less portable.
  • Performance: Containers generally offer near-native performance because they run directly on the host kernel, whereas VMs incur some overhead from running a full guest operating system on virtualized hardware. VMs, however, can give more predictable performance when workloads need strictly dedicated resources.
  • Scalability: Containers are generally easier to scale than VMs because they are lightweight and do not require a full operating system. This makes it easier to add or remove containers as needed to meet changing demands.

To summarize, containers and VMs are both technologies that allow you to run applications in an isolated environment, but they differ in terms of isolation, portability, performance, and scalability. The right choice for your needs will depend on your specific requirements and use case.

Differences between containers and VMs. Source: Docker, What is a container?

Key takeaways for Containers


Benefits:

  • Portability. Containers are portable, meaning that they can be easily moved from one environment to another. This makes it easier to deploy applications on different platforms, such as from a developer’s laptop to a staging environment to production.
  • Isolation. Containers are isolated from each other and the host operating system, which means that they do not interfere with each other or with the host system. This makes it easier to run multiple applications on a single machine without worrying about conflicts.
  • Resource efficiency. Containers are lightweight and use fewer resources than traditional virtualization. This makes it possible to run more applications on a single machine, which can be useful in environments where resources are limited.
  • Ease of deployment. Containerization tools like Docker make it easy to build, manage, and deploy containers, which is useful in environments where applications are updated or added frequently.


Challenges:

  • Security. Containers are isolated, but they are not entirely secure. It is important to carefully manage access to containers and to ensure that they are properly patched and updated to prevent security vulnerabilities.
  • Complexity. Containerization can add complexity to an application, especially if you are using multiple containers or orchestrating them with a tool like Kubernetes. It can take time to learn how to use these tools effectively.
  • Performance. While containers are generally lightweight and efficient, they may not provide the same level of performance as a dedicated physical machine. This can be a concern in environments where high performance is critical.

Open-source tools

Several tools and technologies are available to help you get started with containerization. The most popular container engine is Docker, an open-source tool for building, deploying, and managing containers via a simple command-line interface, making it easy to use and widely adopted by developers. Rkt, Containerd, and LXC are alternatives to Docker. Other open-source tools, more focused on the orchestration of containers, include Kubernetes, OpenShift, and Nomad. Containerization platforms are also available in the public cloud, like Amazon Elastic Container Service (ECS) or Google Kubernetes Engine (GKE) to manage your containers when deployed in the cloud.

Open-source Private Cloud

A private cloud is a type of cloud computing that offers similar benefits to the public cloud, including scalability and self-service, but through a proprietary architecture. It is usually dedicated to a single organization and can be hosted either on-premises or in a data center owned and managed by a third party. The main difference between public and private clouds is that private clouds offer a higher level of control, security, and customization, since all infrastructure is dedicated to a single organization. This makes private clouds ideal for organizations with strict security and compliance requirements, such as those that handle sensitive data. The following key points distinguish a private cloud from a public cloud:

  • Ownership: Private clouds are owned and operated by a single organization and dedicated to its use, while public clouds are owned and operated by third parties and are available to the public.
  • Availability: Private clouds are only available to the organization that owns them, while public clouds are available to anyone with an internet connection who needs computing resources.
  • Control: Private clouds give enterprises full control over the infrastructure and its configuration, while public clouds offer limited control over the underlying infrastructure and its configuration.
  • Security: Compared to public clouds, private clouds can offer a higher level of security, as they are dedicated to a single organization and can be configured to meet its specific security requirements.
  • Cost: Public clouds are often cheaper than private clouds because infrastructure costs are shared among many users. Private clouds can be more expensive due to the cost of building and maintaining a dedicated infrastructure.
Source: Cloudflare, “What is the difference between a public cloud and a private cloud?”

Benefits of a Private Cloud

Private clouds have several advantages over public clouds and traditional on-premises infrastructure. One of the main benefits of a private cloud is the ability to customize the infrastructure: the cloud can be configured and tailored to the unique requirements and preferences of a particular organization. This customization can include choosing exact hardware and software configurations, setting up network and security configurations, and defining storage and compute resources. This offers more flexibility and control than public clouds, which can have customization limitations. For example, an organization might start with a small private cloud infrastructure and then add more computing and storage resources as the company expands.

This greater flexibility also applies to software, not only to hardware configuration. For example, an organization may need specific applications to support its business operations and can easily install and configure those applications to work in a private cloud. Deployment configurations can also cover network and private cloud security, for example by implementing specific firewall rules or network segmentation to meet security requirements.

Another advantage of private clouds is the increased security they offer. Because private clouds are designed for a single organization, they can be configured to meet the organization’s specific security requirements and provide, for example, a secure and controlled environment when dealing with sensitive data. Private clouds can also help organizations meet compliance requirements such as data protection laws.

Despite the higher initial cost, private clouds can also become less expensive over time than public clouds, as companies can avoid ongoing costs such as license fees and maintenance costs. Having full control of the infrastructure and settings, private clouds can also be integrated into an organization’s existing infrastructure, making it easier to manage and maintain.

Private Cloud and Open-Source Interoperability

Private cloud interoperability with open-source technologies refers to the ability of private cloud infrastructure to seamlessly integrate and work with open-source software and tools. Many private cloud solutions, such as OpenStack, are based on open-source technologies and benefit from a large and active community of developers and users. This community provides organizations with a wealth of knowledge, tools, and resources, so they can easily integrate their cloud infrastructure with open-source software and tools.

For example, an organization can use open-source databases like MySQL or PostgreSQL in its own private cloud infrastructure. The private cloud platform can be configured to work seamlessly with these databases, providing a unified and integrated solution. In addition, much open-source software is designed to be highly interoperable with various cloud platforms, including private clouds. This means enterprises can easily integrate their private cloud infrastructure with a range of open-source automation, monitoring, and security tools.

In summary, the interoperability of private clouds with open-source technologies offers companies a flexible and highly integrated solution that lets them combine their cloud infrastructure with a wide range of open-source software and tools.

Some of the most used open-source private clouds are the following:

  • OpenStack: OpenStack is a widely used open-source cloud computing platform that provides IaaS (Infrastructure as a Service) capabilities. It is a modular platform that allows companies to easily add or remove components as needed. It offers a wide range of features, including computing, storage, and network management, as well as integration with various open-source and commercial tools.
  • CloudStack: CloudStack is another open-source cloud computing platform with IaaS capabilities. It is designed for high scalability and is ideal for companies with large and complex cloud infrastructures. It offers as well a range of features including computing, storage and network management, and integration with various open-source and commercial tools.
  • Apache Mesos: Apache Mesos is the first open-source cluster manager that efficiently manages workloads in a distributed environment using dynamic provisioning and resource isolation. It makes it easier to build flexible, fault-tolerant distributed systems and run them efficiently.
  • Eucalyptus: Eucalyptus is an open-source private cloud platform, which provides IaaS and is highly compatible with Amazon Web Services (AWS). This compatibility makes it a popular choice for companies moving workloads from AWS to a private cloud.

While in principle these open-source private clouds offer similar features, it is important to point out some notable differences, such as their architecture, scalability, and compatibility with other tools and platforms. For example, OpenStack is a highly modular platform that gives organizations more flexibility to add or remove components. CloudStack, on the other hand, is designed for high scalability, making it an excellent choice for companies with large and complex cloud infrastructures. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for managing and scheduling resources across data centers and cloud environments, while remaining cloud-provider independent. Eucalyptus is designed for high compatibility with AWS, making it an ideal choice for companies looking to move workloads from AWS to a private cloud.

Do you want to know more about the benefits and challenges of building your private cloud? Read our focus on OpenStack here.

OpenCloudification Asks

Which Open Cloud Technologies are you using or considering using?