Cloud Adoption Framework

By Douglas Bernardini

Enterprise Architecture, as a discipline focused on connecting an enterprise’s current reality to one desired in the future, can contribute to enterprises when it comes to managing cloud-based systems. A primary benefit related to EA and cloud includes seeing how and where newer, highly disparate cloud systems might fit with legacy versions.

Enterprise Architecture (EA) functions to assist enterprises in building structural foundations to match proposed business strategies. It captures the vision of an enterprise by integrating its dimensions to contextualize transformation strategies, organizational structures, business capabilities, data pools, IT applications, and all technology objects. Every business unit of an enterprise is subject to change, and each change may have significant consequences throughout organizational domains.

An enterprise that wants to adopt the cloud across all the business units must have a mature and well-formed understanding of its Enterprise Architecture and a clear view of components therein.

Cloud computing is a paradigm for decentralizing data centers by virtualizing both infrastructure and platform and delivering services over the internet. It gives access to platforms, services, and tools from browsers deployed across millions of terminals. It also reduces the management and maintenance of all the resources associated with technology and infrastructure while providing dynamism, independence, portability, usability, and scalability of platform tools.

Amazon Web Services, Microsoft Azure, and Google Cloud are the market leaders in cloud services.

Next generation enterprise trends and Cloud Adoption

Top business trends as a result of cloud adoption include:

  • Expanding ecosystems of applications to stimulate market responsiveness.
  • Shifting business models to become integrators of best-of-breed services.
  • New regulatory requirements arising from a collaborative global economy and the need to address open markets.
  • Leveraging digital proliferation to deliver more intelligent, predictive customer experiences.
  • Transformation and optimization across different process stacks: sales, front office, middle office, and back office.

Top technology trends as a result of cloud adoption include:

  • Data center rationalization (hybrid options to replace data centers).
  • Movement of IT development and testing to the cloud.
  • Maximizing productivity with scalability and high availability. Top-tier functions are increasingly moving to private cloud and secondary ones to public cloud.
  • Emergence of cloud service brokerages: a shift to hybrid models.
  • Elevating traditional services by offering new digitized products, such as cloud-based storage for customer files.
  • Fusing IaaS with PaaS.

Cloud Adoption framework based on enterprise architecture

A well-run EA program can streamline cloud transformations using best practices developed from IT strategy, business policy, organizational planning, and stakeholder decision-making. However, in order to integrate EA with the cloud computing services of an enterprise, a framework needs to be established whereby EAs manage all stages of an enterprise’s cloud adoption.

Cloud Adoption strategy

This is the first step in cloud adoption. At this stage, the tasks typically involve collecting all artifacts and related information about an enterprise’s current “as-is” state and all formal procedures for the daily operations of existing EA. One should use this stage to analyze the needs, requirements, and trends in each of the business units of an organization while validating the potential weaknesses, strengths, opportunities, and threats in the adoption of a cloud.

It is necessary to develop an understanding at this stage of the business’s overall strategy and its organizational goals.

Further, a Cloud Adoption Strategy stage is the time to outline all information about the expected goals of target architecture, the identities of relevant stakeholders, the complexity of architectural visions, and the various approvals required when initiating change.

Cloud Adoption planning

Use this stage to understand the as-is architecture and existing EA across the organization. Doing so involves defining business models according to operational roles and activities, and the gathering of operational costs. It is a time to align requirements and motivations for cloud migration with EA models.

Cloud Adoption Planning is about envisioning the opportunities available once cloud computing is actually implemented, describing these possible benefits in clear detail, and then evaluating them against the concerns of relevant stakeholders and the capabilities of potential vendors. With this awareness, an appropriate cloud environment for applications, based on cost efficiency and performance, can be chosen.

Cloud business case

EA can broaden a business case for cloud adoption by providing an understanding of overarching capabilities needed to support the implementation and ongoing maintenance of the new platform. Such methods might include delineating the business case and ROI inputs to estimate the required budget, assessing the ease-of-adoption in technical terms, and then selecting the cloud provider.

The following activities should occur in this stage:

  • Identifying all viable business alternatives associated with any proposed cloud technology in order to form a holistic overview of upcoming opportunities.
  • Identifying and structuring all benefits to the business of cloud transformation, based on explicitly detailed degrees of impact.
  • Calculating the budget required and the ROI expected during EA changes.
  • Estimating migration timeframes.
  • Analyzing present risks and establishing an ongoing approach for identifying and mitigating future risks.

Target architecture & cloud enabler

The use of new cloud services to augment the target Business Architecture, Information System Architecture, and Technology Architecture is detailed at this stage. This spans Information Architecture (physical and logical data models), Application Architecture (functional components and the interrelationships between systems), and Technology Architecture (hardware, software, and communication infrastructure).

This stage should be when all information is identified on target architecture that can be used to accommodate cloud transitions and help detail business requirements. The impact of architectural changes on business models should also be defined at this stage.

Cloud transition planning

Cloud Transition Planning is when all technical gaps between target and baseline EA must be recognized and then logically described. It is an analysis for identifying the shortfalls between actual and potential performance and then using this analysis to create a change management plan.

The creation of a detailed plan for the actual implementation and migration from the existing to the target architecture occurs during this phase.

Cloud Transition Planning typically consists of the following activities:

  • Laying out a cloud migration strategy that lists the required processes, tools, and business chargeback models for migrating existing business applications to the cloud, as well as placement decisions for new applications.
  • Assessing all enterprise applications to determine which business-function "bundles" they fall into and the relationships between applications and bundles. This information is then used to recommend a suitable market-based cloud product of relevant business value.
  • Given that cloud providers make such implementation arrangements, it is wise to review the desired requirements of the target architecture in order to obtain from vendors a worthwhile service-level agreement (SLA), a properly configured network setup, and a clear understanding of how the cloud can be integrated to operate with other clouds.

Cloud implementation planning & governance

At this stage, final confirmation is received on the scope and priorities of cloud migrations and deployment. SLAs are established, as are policies and security standards, and the allocation of authority and responsibility is distributed. Cloud Implementation Planning and Governance cover the strategy-to-execution phases of a cloud adoption strategy.

Identification of all deployment resources and skills is required during this stage. EA compliance reviews are performed here, and the implementation of business and IT operations plus post-implementation reviews occur.

Other activities performed during this stage include:

  • Evaluating business-level policies.
  • Understanding differences in service and deployment models.
  • Identifying critical performance objectives.
  • Evaluating security and privacy requirements.
  • Identifying service management requirements.
  • Preparing for service failure management.
  • Understanding recovery plans.
  • Establishing effective management processes.
  • Creating the exit process.

Any and all results from these many monitoring activities must be documented and shared in a post-implementation review to offer input for further improvements in future projects. Of note, long-standing providers of enterprise architecture management software have extended their offerings with dedicated products for end-to-end cloud implementation and governance planning.

Cloud technology is playing a major role in the transformation of modern enterprises. Though it is not a complete answer to the problems of on-premise solutions, moving certain enterprise applications and processes to the cloud can certainly reduce many of the organizational hurdles once addressed only through good EA management.

Cloud adoption frameworks

Furthermore, as cloud technology matures, the core ways in which businesses operate will continue to change. It is thus necessary for EA to moderate the speed of disruption to a level carefully aligned with an enterprise's capacity for change.

EA enables organizations to undergo digital transformation to implement new cloud systems with considerably fewer complications, and an EA Framework for Cloud Adoption like the one presented is an approach that can easily be followed to reduce development times, improve scalability, expand storage capacities, improve the reliability of services, and fortify security.

To repeat, cloud computing can assist enterprises in:

  • Moving to an agile operating model and removing supposedly “indispensable” technology.
  • Attracting top digital talent from across industries and providing them a toolbox of the best technologies available.
  • Transforming architecture to scale capabilities and enable dynamic API-based interactions.
  • Bringing cloud-native capabilities to applications and upgrading them to deliver real-time metrics and actionable data.
  • Creating digital capabilities for accelerating revenue by using an agile, opex-based cloud model.

It all goes to show that no matter what tools enterprises choose to use, the core problem is not always the technology. It lies in defining the relationships between the different components, from business to IT.

Multi-Cloud Architecture

By Douglas Bernardini

Cloud providers have been innovating at great speed, continuously releasing new services and features. There are often multiple services that do the same thing, and numerous ways to design and architect your applications and workloads using different services, features, and patterns. This growing list of options increases the complexity of designing and architecting a cloud application.

Furthermore, different cloud architects can come up with completely different architectures for the same business requirement. There is no definite, consistent way to confirm whether an architecture meets the needs of modern business applications, such as security, reliability, and performance. As your business grows or changes over time, your workloads need to scale and adapt to new business requirements, which adds considerable overhead and complexity to your technology architecture.

Fortunately, the three hyperscale cloud providers have realized this growing complexity. All three: Amazon Web Services, Microsoft Azure and Google Cloud Platform (GCP), have come up with a cloud architecture framework for their respective clouds.

These cloud architecture frameworks provide architecture guidance, design principles, best practices, and additional tools and resources for designing and operating applications in the cloud. Interestingly, all of them are based on common architectural pillars.

The three major providers are recommending these common pillars for a well-architected application. You should also have measures and processes in your organization to ensure that your cloud architects are designing and reviewing applications based on these pillars.

In this article, you will find details about the cloud architecture framework for the three largest providers: AWS, Azure and GCP. They also provide related tools and frameworks. You will learn about the complete landscape of frameworks and tools, and what features they provide. You can use this information to strategize how you want to leverage these frameworks and tools in your organization.

In the second part of this article (that will be published later), I’ll get deeper into why all the providers chose exactly the same pillars in their frameworks. What is common among them, how to use them in your applications, and what can you do to take full advantage of them for your organization’s specific business needs.

The three hyperscale providers (AWS, Azure, and GCP) offer different types of resources, such as frameworks, tools, and add-ons, to help you adopt the cloud and design well-architected cloud applications. Most of these resources use common pillars to categorize their guidance and recommendations.

Common Pillars

Cloud architecture frameworks from all three providers are based on five common pillars:

  • Cost Optimization
  • Operational Excellence
  • Reliability
  • Performance Efficiency
  • Security

Architecture Frameworks

The architecture frameworks are the core foundation of the landscape. They provide a set of guiding tenets for building high-quality, secure, high-performing, resilient, and efficient cloud applications and workloads. You cannot have a single architectural approach that fits all kinds of requirements, but some general concepts apply regardless of the architecture, technology, or cloud provider. Hence, the frameworks help you formulate a consistent approach to evaluating and implementing cloud architectures. They provide guiding principles and best practices categorized across the five common pillars, Cloud CORPS.

It is not just the pillars that are common across the frameworks; in fact, two providers (AWS and Azure) have named their frameworks exactly the same: the Well-Architected Framework. GCP named its framework differently: the Google Cloud Architecture Framework. GCP also uses slightly different names for some pillars. Its engineers call the Security pillar "Security, privacy, and compliance," and they have combined the "Performance Efficiency" and "Cost Optimization" pillars into one called "Performance and cost optimization."

It is interesting to note how much happened around architecture frameworks in 2020. Google launched its Cloud Architecture Framework in May 2020. AWS released the eighth version of its Well-Architected Framework in July 2020; it had first released the framework publicly in 2015, as the fourth version (the first three versions were internal). Microsoft also launched its Azure Well-Architected Framework in July 2020, keeping both the framework name and the pillar names exactly as in AWS.

Architecture Framework Tools

AWS and Azure also provide tools to evaluate your workloads and applications against the latest guidance in their well-architected frameworks. These tools are assessment questionnaires in which you answer many multiple-choice questions under each pillar. Based on your answers, the tool provides recommendations with a detailed description and a list of resources to implement them. I expect Google to release a similar assessment tool soon.

Architecture Framework Add-ons

Add-Ons extend the guidance offered by the base architecture framework to specific industry and technology domains, such as machine learning, analytics, serverless, high-performance computing (HPC), IoT (Internet of Things), and financial services. You can add the applicable add-ons to evaluate your workloads completely. At this point, only AWS offers such add-ons, called AWS Well-Architected Lenses. Considering that AWS added the lenses to its well-architected family in their sixth version, I expect other providers also to release similar add-ons in their future releases.

Advisor Tools

AWS and Azure also have related tools called AWS Trusted Advisor and Azure Advisor, respectively. The architecture framework tools depend on you to provide answers to generic assessment questions to offer recommendations, which are also generic in nature. On the other hand, the advisor tools do not depend on you. They scan, inspect and analyze your deployed workloads and offer personalized and actionable recommendations, which are specific to your deployments. The recommendations are categorized into different areas, which are either the same or related to pillars. While Google does not provide such a tool at present, I expect it to release one in the future.

Cloud Adoption Frameworks

The architecture frameworks assume that you have already adopted the cloud in your organization, and hence they focus primarily on designing and operating your workloads successfully. Their key target audience is cloud architects, engineers, and developers. However, if you need guidance on how to adopt the cloud in your organization to realize the maximum benefits, all three providers also offer a 'Cloud Adoption Framework.' Google additionally provides a cloud maturity assessment survey as part of its adoption framework. The cloud adoption frameworks help you identify key cloud adoption activities, gaps, and objectives in terms of people, technology, and processes. They provide best practices, documentation, and tools to help you create and implement the business and technology strategies necessary for your organization to succeed in the cloud. Hence, their key target audience is cloud architects, IT professionals, and business decision-makers.

The two types of frameworks, architecture and adoption, are interconnected. The cloud adoption frameworks operate at three high-level strategy levels: business, platform, and workloads. The architecture frameworks can be considered part of the adoption framework at the workload level.

Leveraging the Cloud Landscape

The three cloud providers released these different types of resources in the Cloud CORPS landscape at different times and in a disjointed manner. However, as most of them are based on the basic CORPS pillars, I expect them to become more integrated and connected with each other in the coming months. Microsoft has already started doing so: its Azure Advisor tool provides recommendations categorized exactly according to the CORPS pillars, and its adoption framework refers to a well-architected assessment. I expect AWS and GCP to follow similar approaches to provide consistent connections among all these resources.

As a cloud professional or business owner, it is important that you understand the different types of resources in this landscape to ensure the success of your cloud journey and investments. You should also understand the relationships between them, and how and when to leverage them in your cloud journey. Even when you are targeting just one cloud for your organization or application, it is worthwhile to look at frameworks and guidance from the other cloud providers to see if there are additional aspects you can leverage. For example, GCP does not have an architecture framework assessment tool at this point. If your application is deployed in GCP, you can still use the questionnaires from Azure or AWS, as many questions are about processes and general architecture guidance and are agnostic to the cloud provider. Even if there are recommendations about using a specific cloud service, you should be able to find the equivalent one in GCP. Similarly, Azure and GCP do not provide domain-specific add-ons at this point. If your application is deployed in these clouds, you can still leverage AWS Well-Architected Lenses for general guidance on those domains.


Cloud Troubleshooting

By Douglas Bernardini

Those diagnosing a technical problem with cloud infrastructure are seeking possible explanations (hypotheses) and evidence that explains the problem. In the short term, they look for changes in the system that roughly correlate with the problem, and consider rolling back, as a first step to mitigate the problem and stop the bleeding. The longer-term goal is to identify and fix the root cause so the problem will not recur.

From the site reliability engineering (SRE) perspective, the general approach for troubleshooting is as follows:

  • Triage: Mitigate the impact if possible
  • Examine: Gather observations and share them
  • Diagnose: Create a hypothesis that explains the observations
  • Test and treat: Identify tests that may prove or disprove the hypothesis
  • Execute the tests and agree on the meaning of the result
  • Move on to the next hypothesis; repeat until solved
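
The hypothesis-driven loop above can be expressed as a minimal Python sketch. The observations and candidate explanations here are hypothetical examples, not part of any SRE tooling:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    description: str
    test: Callable[[dict], bool]  # returns True if the evidence confirms it

def troubleshoot(observations, hypotheses):
    """Walk the candidate explanations in order and return the first
    one whose test is confirmed by the gathered observations; None
    means every hypothesis was disproved and more evidence is needed."""
    for hypothesis in hypotheses:
        if hypothesis.test(observations):
            return hypothesis.description  # confirmed: treat this cause
    return None

# Hypothetical example: two candidate explanations for elevated errors.
observations = {"deploy_in_last_hour": True, "db_cpu_pct": 45}
candidates = [
    Hypothesis("database overload", lambda o: o["db_cpu_pct"] > 90),
    Hypothesis("recent bad deploy", lambda o: o["deploy_in_last_hour"]),
]
print(troubleshoot(observations, candidates))  # → recent bad deploy
```

In practice each "test" is an investigation step (checking a graph, reading a log) rather than a lambda, but the discipline is the same: one hypothesis at a time, each either confirmed or explicitly ruled out.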

When you’re working with a cloud provider on troubleshooting an issue, there are parts of the process you’re unable to control. But you can follow the steps on your end. Here’s what you can do when submitting a report to your cloud provider support team.

1. Communicate any troubleshooting you've already done

By the time you open an issue report, you’ve probably done some troubleshooting already. You may have checked the provider’s status page, for example. Share the steps you’ve taken and any key findings. Keep a timeline and log book of what you have done and share it with the provider. This means that you should start keeping a log book as soon as possible, from the start of detection of your problem. Keep in mind that while cloud providers may have telemetry that provides real-time omniscient awareness of the state of their infrastructure, the dependencies that result from your particular implementation may be less obvious. By design, your particular use of cloud resources is proprietary and private, so your troubleshooting vantage point is vital.

If you think you have a diagnosis, explain how you came to that conclusion. If you think others can reproduce the issue, include the steps to do so. A reproducible test in an issue report usually leads to the fastest resolution.

You may have an idea or guess about what’s causing the problem. Be careful to avoid confirmation bias—looking for evidence to support your guess without considering evidence to the contrary.

2. Be specific and explicit about the issue

If you’ve ever played the telephone game, in which players whisper a message from person to person, you’ve seen how human translation and interpretation can lead to communication gaps. Rather than describing information in your provider communications, try to share it. Doing so reduces the chance that your reader will misinterpret what you’re saying and can help speed up troubleshooting. Don’t assume that your provider has access to all of this information; customer privacy means that they may not, by design.

For example:

  • Use a screenshot to show exactly what you see.
  • For web-based interfaces, provide a HAR (HTTP Archive) file.
  • Attach information such as tcpdump output, log snippets, and example stack traces.
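
To make the "share, don't describe" point concrete, here is a minimal Python sketch of bundling raw evidence into a structured report. The field names and helper function are illustrative, not any provider's API:

```python
import datetime
import json
import platform

def build_issue_report(summary, timeline, artifacts):
    """Assemble a structured issue report so support receives raw
    observations (logs, traces) rather than paraphrased descriptions.
    `timeline` is a list of (timestamp, event) pairs; `artifacts`
    maps a label such as 'tcpdump' or 'stack_trace' to captured text."""
    return {
        "summary": summary,
        "reported_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "environment": platform.platform(),
        "timeline": timeline,
        "artifacts": artifacts,
    }

report = build_issue_report(
    summary="HTTP 503 from load balancer since 09:12 UTC",
    timeline=[("09:12", "first 503 observed"), ("09:20", "rollback attempted")],
    artifacts={"log_snippet": "upstream timed out (110: Connection timed out)"},
)
print(json.dumps(report, indent=2))
```

Keeping the timeline and artifacts in one machine-readable structure also makes it easy to append entries to your log book as the investigation progresses.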

3. Report production outages quickly

An issue is considered to be a production outage if your application has stopped serving traffic to users or is experiencing similar business-critical impact. Report production outages to your cloud provider support as soon as possible. Issues that block a small number of developers in a developer test environment are normally not considered production outages, so they should be reported at lower priorities.

Normally, when cloud provider support is alerted about a production outage, they quickly triage the situation with the following steps:

  • Immediately check for known issues affecting the infrastructure.
  • Confirm the nature of the issue.
  • Establish communication channels.

Typically, you can expect a quick response with a brief message, which might contain:

  • Whether or not there is a known issue affecting multiple customers.
  • An acknowledgement that they can observe the issue you've reported, or a request for more details.
  • How they intend to communicate (for example, phone, Skype, or issue report).

It's important to quickly create an issue report including the four critical details (described in part one) and then begin deeper troubleshooting on your side of the equation. If your organization has a defined incident management process (see Managing Incidents), escalating to your cloud provider should be among your initial steps.

4. Report networking issues with specificity

Most cloud providers’ networks are huge and complex, composed of many technologies and teams. It’s important to quickly identify a networking-specific problem as such and engage with the team that can repair it.

Many networking issues have similar symptoms, like “can’t connect to server,” at a high level. This level of detail is typically too generic to be useful in identifying the root cause, so you need to provide more diagnostic information. Network issues relate to connectivity, which always involves at least two specific points: source and destination. Always include information about these points when reporting network issues.
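
As a sketch, capturing both endpoints plus the observed outcome might look like the following Python snippet; the source label is made up for illustration, and a local listener stands in for the remote service:

```python
import socket
import time

def probe(source_note, host, port, timeout=3.0):
    """Attempt a TCP connection and record the source, destination,
    outcome, and elapsed time: the minimum a network issue report
    should contain."""
    start = time.time()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            outcome = "connected"
    except OSError as exc:
        outcome = f"failed: {exc}"
    return {
        "source": source_note,  # e.g. instance ID, region, VPC/subnet
        "destination": f"{host}:{port}",
        "outcome": outcome,
        "elapsed_s": round(time.time() - start, 3),
    }

# A local listener stands in for the remote service in this example.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
result = probe("test-client (127.0.0.1)", host, port)
print(result)
server.close()
```

Running the same probe from different sources (your workstation, an affected VM, a VM in another zone) quickly narrows down where in the path the failure occurs.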

5. Escalate when appropriate

If circumstances change, you may need to escalate the urgency of an issue so it receives attention quickly. Take this step if business impact increases, if an issue is stuck without progress after a lot of back-and-forth with support, or if some other factor calls for quicker resolution.

The most explicit way to escalate an issue is to change the priority of the issue report (for example, from P3 to P2). Provide comments about why you need to escalate so support can respond appropriately.

6. Create a summary document for long-running or difficult issues

Issue state and relevant information change over time as new facts come to light and hypotheses are ruled out. In the meantime, new people join the investigation. Help communicate relevant, up-to-date information by collecting information in a summary document.


Cloud Monitoring

Cloud Monitoring Best Practices To Implement Now

The following best practices can help you improve your cloud monitoring strategy:

  1. Establish goals for your cloud monitoring investment so that you can measure progress.
  2. Set up a process for continuous monitoring and improve it as you gather more information.
  3. Collect different teams' insights about which metrics are important to monitor and what to do with the data.
  4. Map monitoring metrics to actual business outcomes within your organization.
  5. Monitor as many of the components that directly affect your business’s bottom line as possible.
  6. Use monitoring tools that give engineers the ability to observe what happened during multi-point failures, allowing them to troubleshoot and debug.
  7. Set thresholds that inform engineers when to react to issues so they can fix them before they become huge problems for your end users.
  8. Start with simple, native tools that your cloud service provider provides before integrating a more robust cloud monitoring solution.
  9. Centralize your monitoring data and display it via unified dashboards and charts. This reduces the need for using multiple tools, services, and APIs to monitor different data.
  10. Automate cloud monitoring. It is possible to conduct monitoring manually. However, the process can be time-consuming and prone to human error.
  11. Monitor your cloud costs. Many tools lack complete cost visibility, especially within public and hybrid clouds. Implement a cloud-based cost intelligence solution to see the what, why, and how of your cloud investment. A tool that displays data in a way that makes sense to your business, such as cost per customer, team, or product, is even better.
  12. Monitor end-user experience. Crash reports, response times, network requests, and page loading details are some metrics that can help you do so.
  13. Run regular chaos tests on your cloud monitoring strategy and tools. Improve your cloud-based applications, services, and architecture as you collect, analyze, and gain insights from more data.
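
Point 7 above, thresholds that trigger before end users notice, can be as simple as the following Python sketch; the metric names and limits are hypothetical:

```python
def check_thresholds(metrics, thresholds):
    """Compare current metric values against alerting limits and
    return the breaches that should page an engineer before end
    users notice a problem."""
    breaches = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            breaches.append(f"{name}={value} exceeds {limit}")
    return breaches

current = {"p95_latency_ms": 870, "error_rate_pct": 0.4, "cpu_pct": 92}
limits = {"p95_latency_ms": 500, "error_rate_pct": 1.0, "cpu_pct": 85}
for breach in check_thresholds(current, limits):
    print(breach)
```

Real monitoring tools add windowing and deduplication on top of this, but the core contract is the same: a metric, a limit, and an action when the limit is crossed.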

Cloud Automation & Orchestration

What Is Cloud Automation?
Cloud automation is the use of automated tools and processes to execute workflows in a cloud environment that would otherwise have to be performed manually by your engineers, like configuring servers or setting up a network.

It enables you to take advantage of cloud resources efficiently and to avoid the security pitfalls that arise in contexts where teams rely too heavily on manual, error-prone workflows.

Cloud automation should therefore be a central component of your overall cloud strategy. Understanding what you can automate in the cloud, and which cloud automation tools can help you achieve the level of automation you need, is essential for leveraging the cloud effectively and at scale. Equally important are practices and tools associated with cloud orchestration, a domain that is related to but distinct from cloud automation (more on that later).

But first, what makes cloud automation different from regular automation in an on-premise IT environment?

Cloud Automation vs. On-Prem Automation

Cloud automation is not fundamentally different from automation in other types of contexts, such as on-premises.

Indeed, in some cases you can use the same automation tools in both the cloud and on-premises (although other automation tools work only with the cloud). If you’ve ever used disk imaging software to configure on-premises PCs automatically, for instance, or used monitoring tools on your local network to perform automatic restarts of servers when they crash, you’re already familiar with the principles behind cloud automation.

However, cloud automation is distinguishable by the following:

1. Cloud automation focuses on automating services and virtual infrastructure

The main difference between cloud automation and other types of automation lies in the types of services to which cloud automation applies.

Because cloud-based environments give users different levels of access to resources than they would have on-premises (for example, cloud environments don’t typically provide end-users with control over physical servers), cloud automation focuses more on automating services and virtual infrastructure than it does on physical devices.

2. Cloud automation is key to handling the scalability and complexity of cloud environments

You may be able to manage a half-dozen local servers by hand easily enough. But in the cloud, where there are dozens of different types of VM instances to choose from, and where servers should be shut down when they're not needed in order to avoid wasted spend, automation plays an especially crucial role.

To put this another way, the cloud is a prime candidate for automation, even more so than other types of IT environments.

Because cloud environments consist of an array of different types of services that can be scaled up and down constantly, automating the management of cloud resources is critical for getting the most value out of the cloud. If you attempt to manage your cloud by hand, you simply won't be able to take full advantage of the opportunities for scalability and agility that the cloud offers to your broader IT strategy.
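
As a toy illustration of why this kind of automation pays off, here is a minimal Python sketch of a scheduling rule that decides whether a non-production server should be running. Everything here (the `should_run` helper, the business-hours window, the environment names) is hypothetical, standing in for logic a real automation job would apply through a cloud provider's API:

```python
from datetime import datetime

# Hypothetical schedule: non-production servers run only on weekdays, 07:00-18:59.
BUSINESS_HOURS = range(7, 19)

def should_run(now: datetime, environment: str) -> bool:
    """Decide whether a server in the given environment should be running."""
    if environment == "production":
        return True  # production servers stay up around the clock
    is_weekday = now.weekday() < 5
    return is_weekday and now.hour in BUSINESS_HOURS

# An automation job would call this periodically and stop or start instances.
print(should_run(datetime(2021, 11, 15, 10, 0), "dev"))  # Monday morning
print(should_run(datetime(2021, 11, 13, 10, 0), "dev"))  # Saturday
```

Applied across a fleet of dev and test servers, even a rule this simple eliminates the idle-server costs that manual management tends to leave behind.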

Use Cases for Cloud Automation
Cloud automation can apply to a wide variety of workflows and tasks. We’ve outlined six key use cases below.

Infrastructure provisioning
Probably the most obvious example of cloud automation is the use of cloud automation tools for infrastructure provisioning.

When you need to set up a collection of virtual servers, for example, it would take a long time to configure each one individually. Cloud automation tools like HashiCorp Terraform or AWS CloudFormation allow you to perform this task automatically by creating templates that define how each virtual server should be configured. The tools then apply the configurations for you.
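
The template-driven approach can be illustrated with a small Python sketch. The `TEMPLATE` dict and `provision` helper below are hypothetical stand-ins for what an IaC tool does when it expands one definition into many identical servers:

```python
# Hypothetical IaC-style template: one definition, many identical servers.
TEMPLATE = {
    "instance_type": "t3.medium",
    "image": "ubuntu-20.04",
    "tags": {"team": "web"},
}

def provision(template: dict, count: int) -> list:
    """Expand a single template into `count` concrete server definitions."""
    servers = []
    for i in range(count):
        server = dict(template)          # each server copies the template...
        server["name"] = f"web-{i:03d}"  # ...and gets a unique name
        servers.append(server)
    return servers

fleet = provision(TEMPLATE, 100)
print(len(fleet), fleet[0]["name"])  # 100 identically configured servers
```

The point is that the engineer maintains one template, not a hundred hand-built servers; the tool applies it consistently every time.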

You can take a similar approach to configuring other types of cloud resources, such as network setups and storage buckets or volumes. In general, any type of cloud automation tool that supports this type of infrastructure provisioning is known as an Infrastructure-as-Code, or IaC, tool.

Note, however, that IaC tools are not strictly limited to use in the cloud. Many are platform-agnostic and work with on-premises environments, too. On the other hand, certain IaC tools, especially those made available by cloud vendors themselves, typically work only with a specific public cloud.

By automating cloud infrastructure provisioning, organisations can scale their cloud infrastructure more quickly. In turn, they gain agility and an enhanced ability to innovate.

Identity provisioning and management
In large-scale cloud environments, a single company may have hundreds of different users, each requiring a different level of access to the various resources in the cloud. Setting up all of these access policies by hand would be a monumental task. Updating them as business needs change and users come and go from the organisation would be harder still.

Using cloud automation, identity management becomes much more efficient. You can use predefined Identity and Access Management (IAM) templates to set up user roles within your cloud environment.
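
A minimal sketch of the idea, with hypothetical role names and permission strings (a real IAM system would use the cloud provider's own policy language rather than these made-up labels):

```python
# Hypothetical role templates mapping job functions to cloud permissions.
ROLE_TEMPLATES = {
    "developer": ["compute:read", "compute:deploy", "logs:read"],
    "auditor": ["logs:read", "billing:read"],
}

def grant_access(user: str, role: str) -> dict:
    """Build an access policy for a user from a predefined role template."""
    if role not in ROLE_TEMPLATES:
        raise ValueError(f"unknown role: {role}")
    return {"user": user, "permissions": list(ROLE_TEMPLATES[role])}

policy = grant_access("alice", "developer")
print(policy["permissions"])
```

Onboarding a new team member then becomes a one-line operation against a vetted template, instead of a hand-assembled set of individual permissions.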

You can also integrate your cloud IAM framework with a centralized enterprise directory service, like Microsoft Active Directory, to centralize identity management across your entire IT infrastructure, including on-premises resources as well as the cloud.

The automation of identity management within the cloud adds organizational agility by making it easier and more efficient to onboard new team members, modify the roles of existing ones and revoke access for employees who leave the company.

Application deployment
Application deployment, which refers to the process of moving a new application release or version from the environment where it was built and tested into the one where it will run in production, can be a time-consuming task if performed by hand.

It’s especially inefficient if you embrace the principles of DevOps and continuous delivery, which may entail pushing out a dozen or more new releases each week.

Cloud automation can help by automatically handling the application deployment process for you. Most modern CI/CD platforms, such as Jenkins, can automatically deploy applications into any major public cloud.

Public cloud vendors themselves also offer automated application deployment solutions, like Azure App Service and AWS CodeDeploy.

By automating application deployment in the cloud, development and IT teams achieve faster release cycles. By extension, they can push out new application features and fix bugs more quickly.

Monitoring and remediation
Once you have provisioned your cloud infrastructure, configured user credentials and deployed workloads, you need to monitor them and respond to incidents that may impact application performance. This is another juncture at which automation is very valuable.

Most public clouds offer built-in monitoring solutions, such as Amazon CloudWatch, that automatically collect metrics from your cloud environment. They allow you to configure alerts that are triggered when predefined thresholds are crossed, such as a server running out of memory or a database becoming unresponsive.

A variety of third-party vendors offer cloud monitoring solutions that allow you to do the same thing. Some also extend their functionality into the realm of automated remediation, which makes it possible to write predefined workflows that the tools automatically execute in response to certain conditions.

For instance, you could configure a workflow so that in the event that a virtual machine fails, another one will be automatically created based on an IaC template that you created ahead of time.
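
That kind of remediation rule can be sketched in a few lines of Python. The `remediate` function and the fleet data below are hypothetical; in practice the replacement VM would be created by applying an IaC template through the cloud provider's API:

```python
# Hypothetical remediation rule: if a VM reports as failed, replace it with
# a fresh one built from a template prepared ahead of time.

def remediate(vms: list, template: dict) -> list:
    """Replace any failed VM with a fresh one built from the template."""
    healthy = []
    for vm in vms:
        if vm["status"] == "failed":
            replacement = dict(template, name=vm["name"] + "-replacement")
            healthy.append(replacement)
        else:
            healthy.append(vm)
    return healthy

fleet = [
    {"name": "app-1", "status": "ok"},
    {"name": "app-2", "status": "failed"},
]
fleet = remediate(fleet, {"status": "ok", "image": "app-v2"})
print([vm["name"] for vm in fleet])
```

A monitoring system would invoke logic like this automatically when an alert fires, rather than paging an engineer to rebuild the machine by hand.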

For companies with large-scale cloud environments, the ability to automate monitoring and take automatic steps to fix problems detected by monitoring systems leads to more stable and higher-performing clouds.

Multi-cloud management
Cloud automation is also becoming increasingly crucial in the context of multi-cloud architectures, in which companies use multiple public or private clouds at once.

Cloud automation tools play an important role in this type of environment by allowing teams to deploy workloads to multiple clouds at once and manage them from a central interface, rather than having to juggle disparate tools for each of the clouds they use.

For example, OneOps, a cloud management platform originally developed by Walmart that is now open source, can automate the deployment of applications to multiple public clouds. Monitoring and performance-optimization tools that work with multiple clouds also enable a type of multi-cloud automation.

For organizations with a multi-cloud strategy, being able to manage all of their clouds with a centralized, automated toolset adds crucial efficiency to their cloud strategy.

Data discovery and classification
Another use case for cloud automation — one that is currently relatively rare, but likely to become increasingly important as more and more organizations face stricter compliance requirements from regulations like GDPR — is the automated discovery and classification of data in the cloud.

Tools like Amazon Macie can automatically scan cloud environments for data that may be sensitive in nature. They may also be able to identify situations where data is improperly secured; for instance, they could alert admins to an Amazon S3 bucket that contains private address data and can be accessed by anyone on the Internet. Third-party data discovery and classification tools for the cloud are available as well, such as Open Raven.
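
The pattern-based scanning these tools perform can be sketched with ordinary regular expressions. The two patterns below are deliberately simplistic, hypothetical classifiers, not what Macie or Open Raven actually use:

```python
import re

# Hypothetical classifiers: simple patterns for data that may be sensitive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data categories detected in `text`."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

print(classify("contact: jane.doe@example.com"))
print(classify("order total: 42 units"))
```

A real tool runs far more sophisticated detection at scale, but the principle is the same: scan automatically, flag matches, and surface them to admins before an auditor (or attacker) finds them first.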

Because discovering and classifying sensitive data by hand would require enormous time and effort, the automation of these processes enables much faster and more efficient protection of sensitive information. In turn, it helps organisations meet compliance goals.

Cloud Automation Benefits
Cloud automation offers a range of benefits:

Time savings: By automating time-consuming tasks like infrastructure provisioning, cloud automation tools allow human engineers to focus on other activities that require higher levels of expertise and cannot be easily automated.

Faster completion: Cloud automation enables tasks to be completed faster. An IaC tool can set up a hundred servers in minutes using predefined templates, for instance, whereas a human engineer might take several days to complete the same work.

Lower risk of errors: When tasks are automated, the risk of human error or oversight drops dramatically. As long as you properly configure the rules and templates that drive your automation, you will end up with clean, consistent environments.

Higher security: By a similar token, cloud automation reduces the risk that a mistake made by an engineer, such as exposing an internal-only application to the public Internet, could lead to security vulnerabilities.

Scalability: Cloud automation is essential for any team that works at scale. It may be possible to manage a small cloud environment — one that consists of a few virtual machines and storage buckets, for example — using manual workflows. But if you want to scale up to hundreds of server instances, terabytes of data and thousands of users, cloud automation becomes a must.

Put together, all of these benefits put businesses in a stronger position to build value. Instead of wasting time and resources managing cloud environments by hand, organisations that leverage cloud automation can focus on activities that deliver direct business benefits, like developing new services and keeping customers satisfied. They can also quickly deploy or modify their IT assets whenever necessary to support a new business initiative.

Cloud Automation and Mature DevOps
Cloud automation and DevOps are distinct concepts. Technically speaking, it’s possible to do one without the other.

In practice, however, cloud automation and DevOps typically go hand-in-hand. And if you want to reach DevOps maturity, cloud automation is an absolutely essential step.

Before we get into why that is, let’s remind ourselves of the critical role automation plays in DevOps.

DevOps Automation
DevOps places enormous emphasis on automation.

DevOps relies on practices including automated infrastructure-as-code, continuous delivery and tight feedback loops – all of which are dependent on automation.

Automation is critical not only for reducing the complexity and variability of your tech stack and infrastructure, but also for scaling them across the business in a sustainable, repeatable fashion.

From a DevOps perspective, then, automation focuses primarily on application development and delivery, which is distinct in most respects from cloud management.

So what about cloud automation?

DevOps and Cloud Automation
Because many application delivery pipelines feed into cloud-based production environments, being able to automate cloud management is crucial for building the type of reliable and efficient application delivery pipeline that DevOps prioritizes.

How so?

Cloud automation enables the following:

Continuous Improvement

Cloud automation can help provide the consistent feedback that is essential for achieving the continuous improvement goals associated with DevOps.

By automatically collecting and sharing data about your cloud environment, your team is in a better position to identify and act on opportunities to improve.

Self-Service Visibility

The templates associated with cloud automation tools provide a level of consistent visibility that benefits all members of the DevOps team.

For example, if a developer wants to know how a production environment is configured, a quick look at the IaC templates that govern that environment will yield the answer.

Because DevOps places a priority on communication and transparency across technical teams, this type of self-service visibility is highly valuable.

In Summary
Although cloud automation and DevOps automation each focus on different types of processes and resources, they reinforce each other in ways that make them inseparable.

That’s especially true for any team that wishes to put DevOps principles into practice at scale in a large, fast-moving cloud environment. Again, it may technically be possible to build a well-automated CI/CD pipeline without cloud automation tools, or to automate your cloud without an automated CI/CD process in place, but in practice the two are rarely found apart.

So if you’re looking to build a more efficient DevOps/CloudOps pipeline, cloud automation is a great place to start.

Automations can be put in place to respond to identified needs or opportunities for optimization at some point (or points) in the pipeline. However, these automations may be disparate themselves or not coordinated from an overall perspective.

That’s where cloud orchestration comes in.

What Is Cloud Orchestration?
Cloud orchestration is automation for all your disparate automations, across separate services and separate clouds. It gives you the big-picture view of your cloud automation as a whole.

Yes, all your automations can be coordinated (and automated) from a higher level!

Cloud orchestration allows you to create an automation environment across the enterprise that coordinates teams, functions, cloud services, and security and compliance activities into repeatable, end-to-end automated processes, boosting productivity and throughput and eliminating costly mistakes. It commonly defines a particular workflow as a series of steps, with timelines if necessary and tasks such as manual sign-offs where required.
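
A workflow of that shape can be sketched as data plus a small runner. The step names and the `approve` callback below are hypothetical; a real orchestrator would persist state and notify the approver rather than simply stop:

```python
# Hypothetical workflow definition: ordered steps, with an optional manual gate.
WORKFLOW = [
    {"step": "provision_environment", "manual_signoff": False},
    {"step": "deploy_application", "manual_signoff": False},
    {"step": "promote_to_production", "manual_signoff": True},
]

def run_workflow(workflow: list, approve) -> list:
    """Execute steps in order; steps flagged for sign-off ask `approve` first."""
    completed = []
    for item in workflow:
        if item["manual_signoff"] and not approve(item["step"]):
            break  # halt the workflow until a human signs off
        completed.append(item["step"])
    return completed

# Simulate a run where the human approval has not yet been given.
print(run_workflow(WORKFLOW, approve=lambda step: False))
```

The automated steps run to completion on their own, and the workflow pauses exactly where the process definition says a human must sign off.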

Let’s take a look at some examples.

Example 1: Repeatable cloud test infrastructure
An example of cloud orchestration may be spinning up a fully functional test environment, running all software tests, then reporting and shutting down infrastructure upon completion.

When used for similar projects, this kind of cloud orchestration template is efficient and repeatable, and it saves cloud resources by tearing down the infrastructure itself when it finishes.

Example 2: Location-dependent security policies
Your business opens a new branch in a new country, with similar teams in place. You want to spin up a similar cloud environment to what your team uses at your main site, but with location-specific security policies.

You utilise the infrastructure-as-code architecture you used for your main site, but layer location-dependent security over the top, which can then be tweaked if you open a branch in another country.

Example 3: Triple redundancy systems
You have mission-critical systems in the cloud. While you’ve built redundancy into your AWS implementation, you’re still concerned about relying completely on one vendor’s platform.

You build a Microsoft Azure implementation that can be switched on automatically if systems go down on AWS.

Why Cloud Orchestration?
Put simply, cloud orchestration brings together a series of lower-level automations, again through infrastructure as code. For the enterprise environment, it’s a must moving forward.

There are simply too many cloud automations to manage on a case-by-case or team-by-team basis. You need to get the bigger picture, and more importantly, you need to be able to effectively manage the bigger picture.

As with cloud automation, there are cloud orchestration tools to help you perform this complex task. Industry leader HashiCorp Terraform (which we’ve talked about previously) and AWS CloudFormation both offer orchestration capabilities, with built-in support for common cloud automation services, and both follow the infrastructure-as-code paradigm.

As we’ve seen, cloud automation and cloud orchestration reinforce each other and often feature within the same conversations. So what’s the difference really?

Cloud Automation Vs. Cloud Orchestration
For most teams today, it makes sense to take advantage of cloud automation and cloud orchestration at the same time. However, these are distinct concepts that are driven primarily by different tools.

The key difference between cloud automation and cloud orchestration is that cloud automation focuses on automating individual types of processes. In contrast, cloud orchestration automates entire workflows, which are themselves composed of various individual processes.

Cloud Automation Use Case:
A cloud automation process might allow you to install an operating system on a server.
Another cloud automation process could configure the network for that server.
A third could set up IAM policies that define who can log into the server.
Cloud Orchestration Use Case:
A cloud orchestration solution would combine these three distinct tasks into a single workflow that automates all aspects of the server’s setup.

In essence, then, you could think of cloud automation as a sub-category of cloud orchestration, or as a building block for it. You can do cloud automation without cloud orchestration, but you can’t have cloud orchestration without cloud automation.
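
That building-block relationship can be sketched in Python: three hypothetical stand-alone automations, and an orchestration function that chains them into a single server-setup workflow:

```python
# Three hypothetical, independent cloud automations. Each performs one task.
def install_os(server: dict) -> dict:
    return {**server, "os": "ubuntu-20.04"}

def configure_network(server: dict) -> dict:
    return {**server, "network": "vpc-main"}

def apply_iam(server: dict) -> dict:
    return {**server, "allowed_users": ["ops-team"]}

def orchestrate_server_setup(name: str) -> dict:
    """Orchestration: chain the individual automations into one workflow."""
    server = {"name": name}
    for automation in (install_os, configure_network, apply_iam):
        server = automation(server)
    return server

server = orchestrate_server_setup("api-1")
print(sorted(server))
```

Each automation is useful on its own, but only the orchestration layer turns them into a complete, repeatable server setup, which is precisely the distinction drawn above.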

Cloud Automation Tools
The market surrounding cloud automation tools is vast. At a high level, however, it can be broken into two categories:

1. Tools and Services Built Into Public Cloud Platforms
Examples: AWS CloudFormation, Azure Resource Manager
Advantage: These tools offer the highest level of integration with their respective platforms.
Disadvantage: Their main drawback is that, in general, they support only the clouds of which they are a part. You can’t take an AWS CloudFormation template and directly apply it to an Azure environment, for example.
2. Tools From Independent Vendors
Examples: HashiCorp Terraform, Puppet, Ansible, Chef and Salt. Most of these solutions are open source in their core form, although many of them serve as the basis for commercial editions, too.
Advantage: In general, all of these solutions will work with any type of public, private or hybrid cloud platform.
Disadvantage: These tools inherently have a lag in implementing functionality when a cloud provider introduces a new feature or product.
Conclusion
In summary, there are several key takeaways to bear in mind about cloud automation:

It’s a must-have for any large-scale cloud environment.
DevOps and cloud automation go hand-in-hand, and it’s very difficult to do one without the other at scale.
Cloud orchestration relies on, but is different from, cloud automation. Cloud orchestration also remains a hazier concept.
The main differences between cloud automation tools include whether they support only one or multiple clouds, and whether they are available in free and open source form or only as paid products.
Finally and most significantly, cloud automation is the only way to extract the most value from cloud environments.

By automating management tasks that would otherwise consume tremendous time and resources, cloud automation empowers organisations to update their cloud environments more quickly in response to business challenges.

In turn, it breeds a greater ability to react to changing business conditions (like the need for more or fewer virtual machine instances, or the addition of new users to a cloud application) by modifying IT configurations accordingly.


Categories
Ransomware Sophos Threats

Sophos Discovers New Memento Ransomware

Memento Ransomware Locked Files in a Password-Protected Archive When it Couldn’t Encrypt the Data and Demanded $1 Million in Bitcoin

Sophos Discovers Memento Ransomware – Sophos, a global leader in next-generation cybersecurity, has released details of a new Python ransomware called Memento. The research, “New Ransomware Actor Uses Password Protected Archives to Bypass Encryption Protection,” describes the attack, which locks files in a password-protected archive if the Memento ransomware can’t encrypt the targeted data.

“Human-led ransomware attacks in the real world are rarely clear cut and linear,” said Sean Gallagher, senior threat researcher at Sophos. “Attackers seize opportunities when they find them or make mistakes, and then change tactics ‘on-the-fly.’ If they can make it into a target’s network, they won’t want to leave empty handed. The Memento attack is a good example of this, and it serves as a critical reminder to use defense-in-depth security. Being able to detect ransomware and attempted encryption is vital, but it’s also important to have security technologies that can alert IT managers to other, unexpected, activity such as lateral movement.”

Attack Timeline

Sophos researchers believe the Memento operators breached the target’s network in mid-April 2021. The attackers exploited a flaw in VMware’s vSphere, an internet-facing cloud computing virtualization platform, to gain a foothold on a server. The forensic evidence Sophos researchers found indicates the attackers started the main intrusion in early May 2021.

The attackers used the early months for lateral movement and reconnaissance, using the Remote Desktop Protocol (RDP), the NMAP network scanner, Advanced Port Scanner, and the Plink Secure Shell (SSH) tunneling tool to set up an interactive connection with the breached server. The attackers also used mimikatz to harvest account credentials for use in later stages of the attack.

According to Sophos researchers, on Oct. 20, 2021, the attackers used the legitimate tool WinRAR to compress a collection of files and exfiltrate them via RDP.

“Ransomware is one of the fastest-growing cyber threats and one of the biggest concerns of customers around the globe,” says Douglas Bernardini, Cyber Security Specialist and Cloud Computing Expert.

Release of the Ransomware

The attackers first deployed the ransomware on Oct. 23, 2021. Sophos researchers found that the attackers initially tried to directly encrypt files, but security measures blocked this attempt. The attackers then changed tactics, re-tooled and re-deployed the ransomware. They copied unencrypted files into password-protected archives using a renamed free version of WinRAR, before encrypting the password and deleting the original files.

The attackers demanded a ransom of $1 million in bitcoin in order to restore the files. Fortunately, the target was able to recover data without the involvement of the attackers.

Open Entry Points Let in Additional Attackers

While the Memento attackers were in the target’s network, two different attackers broke in via the same vulnerable access point, using similar exploits. These attackers each dropped cryptocurrency miners onto the same compromised server. One of them installed an XMR cryptominer on May 18, while the other installed an XMRig cryptominer on Sept. 8 and again on Oct. 3.

“We’ve seen this repeatedly – when internet-facing vulnerabilities become public and go unpatched, multiple attackers will quickly exploit them. The longer vulnerabilities go unmitigated, the more attackers they attract,” said Gallagher. “Cybercriminals are continuously scanning the internet for vulnerable online entry points, and they don’t wait in line when they find one. Being breached by multiple attackers compounds disruption and recovery time for victims. It also makes it harder for forensic investigations to unpick and resolve who did what, which is important intelligence for threat responders to collect to help organizations prevent additional repeat attacks.”

Security Advice

Sophos believes this incident, where multiple attackers exploited a single unpatched server exposed to the internet, highlights the importance of quickly applying patches and checking with third-party integrators, contract developers or service providers about their software security.

Sophos also recommends the following general best practices to help defend against ransomware and related cyberattacks:

At a Strategic Level

  • Deploy layered protection. As more ransomware attacks begin to involve extortion, backups remain necessary, but insufficient. It is more important than ever to keep adversaries out in the first place, or to detect them quickly, before they cause harm. Use layered protection to block and detect attackers at as many points as possible across an estate
  • Combine human experts and anti-ransomware technology. The key to stopping ransomware is defense-in-depth that combines dedicated anti-ransomware technology and human-led threat hunting. Technology provides the scale and automation an organization needs, while human experts are best able to detect the tell-tale tactics, techniques and procedures that indicate an attacker is attempting to get into the environment. If organizations don’t have the skills in house, they can enlist support from cybersecurity specialists

At a Day-to-Day Tactical Level

  • Monitor and respond to alerts. Ensure the appropriate tools, processes, and resources (people) are available to monitor, investigate and respond to threats seen in the environment. Ransomware attackers often time their strike during off-peak hours, at weekends or during the holidays, on the assumption that few or no staff are watching
  • Set and enforce strong passwords. Strong passwords serve as one of the first lines of defense. Passwords should be unique and complex and never re-used. This is easier to accomplish with a password manager that can store staff credentials
  • Use Multi Factor Authentication (MFA). Even strong passwords can be compromised. Any form of multifactor authentication is better than none for securing access to critical resources such as e-mail, remote management tools and network assets
  • Lock down accessible services. Perform network scans from the outside and identify and lock down the ports commonly used by VNC, RDP, or other remote access tools. If a machine needs to be reachable using a remote management tool, put that tool behind a VPN or zero-trust network access solution that uses MFA as part of its login
  • Practice segmentation and zero-trust. Separate critical servers from each other and from workstations by putting them into separate VLANs as you work towards a zero-trust network model
  • Make offline backups of information and applications. Keep backups up to date, ensure their recoverability and keep a copy offline
  • Inventory your assets and accounts. Unknown, unprotected and unpatched devices in the network increase risk and create a situation where malicious activities could pass unnoticed. It is vital to have a current inventory of all connected compute instances. Use network scans, IaaS tools, and physical checks to locate and catalog them, and install endpoint protection software on any machines that lack protection
  • Make sure security products are correctly configured. Under-protected systems and devices are vulnerable too. It is important that you ensure security solutions are configured properly and to check and, where necessary, validate and update security policies regularly. New security features are not always enabled automatically. Don’t disable tamper protection or create broad detection exclusions as doing so will make an attacker’s job easier
  • Audit Active Directory (AD). Conduct regular audits on all accounts in AD, ensuring that none have more access than is needed for their purpose. Disable accounts for departing employees as soon as they leave the company
  • Patch everything. Keep Windows and other operating systems and software up to date. This also means double checking that patches have been installed correctly and are in place for critical systems like internet-facing machines or domain controllers

Sophos endpoint products, such as Intercept X, protect users by detecting the actions and behaviors of ransomware and other attacks. The act of attempting to encrypt files is blocked by the CryptoGuard feature. Integrated endpoint detection and response, including Sophos Extended Detection and Response (XDR), can help capture nefarious activities, such as when attackers create password-protected archives like those used in the Memento ransomware attack.

To learn more, please read the Memento ransomware article on SophosLabs Uncut.

Additional resources

  • To learn more about evolving cyberthreats, including ransomware and cryptominers and what they mean for IT security in 2022, read the Sophos 2022 Threat Report
  • Tactics, techniques, and procedures (TTPs) and more for different types of threats are available on SophosLab Uncut, which provides Sophos’ latest threat intelligence
  • Information on attacker behaviors, incident reports and advice for security operations professionals is available on Sophos News SecOps
  • Learn more about Sophos’ Rapid Response service that contains, neutralizes and investigates attacks 24/7
  • The four top tips for responding to a security incident from Sophos Rapid Response and the Managed Threat Response Team
  • Read the latest security news and views on Sophos’ award-winning news website Naked Security and on Sophos News

About Sophos

Sophos is a worldwide leader in next-generation cybersecurity, protecting more than 500,000 organizations and millions of consumers in more than 150 countries from today’s most advanced cyberthreats. Powered by threat intelligence, AI and machine learning from SophosLabs and SophosAI, Sophos delivers a broad portfolio of advanced products and services to secure users, networks and endpoints against ransomware, malware, exploits, phishing and the wide range of other cyberattacks. Sophos provides a single integrated cloud-based management console, Sophos Central – the centerpiece of an adaptive cybersecurity ecosystem that features a centralized data lake that leverages a rich set of open APIs available to customers, partners, developers, and other cybersecurity vendors. Sophos sells its products and services through reseller partners and managed service providers (MSPs) worldwide. Sophos is headquartered in Oxford, U.K. More information is available at www.sophos.com.

Source: https://www.globenewswire.com/news-release/2021/11/18/2337364/0/en/Sophos-Discovers-New-Memento-Ransomware.html

See also: SOPHOS 2021 THREAT REPORT

Categories
ProofPoint Security

Proofpoint Wins Three Categories at 2021 CISO Choice Awards

Cybersecurity Leader Named Premier Security Company for second straight year; Also Finishes First in Email Security, Cloud Security categories as determined by Board of CISO Judges

Proofpoint, Inc., a leading cybersecurity and compliance company, today announced it took top honors in three categories at the 2021 CISO Choice Awards including Premier Security Company for the second straight year. Proofpoint also won the categories of best Email Security and Cloud Security solutions.

A first-of-its-kind vendor recognition selected by a CISO Board of Judges – leading security executives across industries – the CISO Choice Awards is a buyer’s guide for their peers when selecting the technologies used to safeguard their organizations. Now in its second year, the awards honor security vendors of all sizes, types, and maturity levels, recognizing differentiated solutions valuable to the CISO and enterprise from security solution providers worldwide.

“Proofpoint is honored to receive top honors by the CISO Choice Awards Board of Judges in three different categories,” said Ryan Kalember, EVP of Cybersecurity Strategy, Proofpoint. “As real-life CISOs applying real-world conditions, the judges understand that today’s attacks target people, not networks. Deploying a layered, people-centric approach to cybersecurity that includes security awareness training and integrated threat protection as found in our Email Security and Cloud Security solutions is crucial for stopping and remediating threats.”

“I would like to congratulate the winners of the 2021 CISO Choice Awards. It was an extremely competitive playing field with a record number of submissions,” said Aimee Rhodes, CEO of CISOs Connect: “It was exciting to hear the judges – who live and breathe security – share their experiences and discuss with one another the wealth of technologies that are on the market or coming to the market. Nothing can replace the real-world insights that the CISO judges bring to the table when deciding on the top vendors. Kudos again to the winners.”

Deployed as a cloud service or on premises, the Proofpoint Threat Protection Platform uses multilayered detection techniques coupled with reputation and content analysis to identify and block a wide range of email-based threats. These threats include email fraud and hybrid attacks that leverage both cloud and email vectors. With Proofpoint’s integrated platform, organizations can obtain actionable insight into threats, enable users to identify and report suspicious messages, and accelerate threat response by automating the threat investigation and remediation process.
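The layered approach described above – weighing sender reputation alongside content analysis before deciding on a message – can be illustrated with a minimal sketch. This is purely hypothetical and in no way reflects Proofpoint’s actual detection engine; the phrase list, weights, and threshold are made-up assumptions:

```python
# Illustrative toy only -- NOT Proofpoint's detection logic.
# Combines a sender-reputation lookup with simple content analysis.

SUSPICIOUS_PHRASES = {"wire transfer", "verify your account", "urgent payment"}
KNOWN_BAD_DOMAINS = {"phish.example"}  # hypothetical blocklist

def reputation_score(sender_domain):
    """Toy reputation layer: 1.0 for a known-bad domain, else 0.0."""
    return 1.0 if sender_domain in KNOWN_BAD_DOMAINS else 0.0

def content_score(body):
    """Toy content layer: fraction of suspicious phrases present in the body."""
    body = body.lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in body)
    return hits / len(SUSPICIOUS_PHRASES)

def classify(sender_domain, body, threshold=0.5):
    """Layered decision: reputation is weighted above content analysis."""
    score = 0.7 * reputation_score(sender_domain) + 0.3 * content_score(body)
    return "block" if score >= threshold else "deliver"
```

The point of the sketch is the layering: a strong reputation signal alone is enough to block, while weak content signals only tip the decision in combination with other evidence.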

“One of the most sensitive layers within cybersecurity is people. Proofpoint is recognized for solutions that address this front,” says Douglas Bernardini, Cyber Security Specialist and Cloud Computing Expert.

For more information on Proofpoint Email Security, please visit: https://www.proofpoint.com/us/products/email-security-and-protection

For more on Proofpoint’s Cloud Security Platform, please visit: https://www.proofpoint.com/us/products/cloud-security

About Proofpoint, Inc.

Proofpoint, Inc. is a leading cybersecurity and compliance company that protects organizations’ greatest assets and biggest risks: their people. With an integrated suite of cloud-based solutions, Proofpoint helps companies around the world stop targeted threats, safeguard their data, and make their users more resilient against cyberattacks. Leading organizations of all sizes, including more than half of the Fortune 1000, rely on Proofpoint for people-centric security and compliance solutions that mitigate their most critical risks across email, the cloud, social media, and the web. More information is available at www.proofpoint.com.

Source: https://www.globenewswire.com/news-release/2021/10/27/2321707/35374/en/Proofpoint-Wins-Three-Categories-at-2021-CISO-Choice-Awards.html

See also: Proofpoint for Continuous Diagnostics and Mitigation

Categories
Gartner Imperva Magic Quadrant WAAP WAP Web Application Protection

Imperva An Eight-Time Magic Quadrant Leader for Web Application and API Protection

2021 has seen a lot of change. Billionaires now go where only governments and Red Bull gimmicks could go before. The 2020 Olympics didn’t take place in 2020. Tom Brady won his 7th Super Bowl for a completely new franchise [those of you in the US get this reference]. Similar change in application security has now been defined by an annual report with a new name.

Gartner® published the 2021 Magic Quadrant™ for Web Application and API Protection and, despite the new name and expanded scope, Imperva has been named a Leader and has consistently been rated highest for Completeness of Vision.

Imperva’s vision is to protect all applications for hybrid enterprises

If you picture an application 8 years ago, what you see is not complex: a very large piece of software running on vSphere in a leased data center. APIs were an innovative tool for tiny start-ups [I remember talking to my development team about the advantages of SOAP and why it was too soon to go to REST]. Amazon Web Services was just starting to offer a certification program for engineers. Clearly, 8 years is a very long time in application development time.

And yet, while so much has changed in 8 years, many web applications today are still versions of what was built then. It takes a great deal of methodical planning to properly migrate to cloud-native technologies, such as serverless functions, and gradual investment to effectively architect applications with RESTful and GraphQL APIs. For years, Imperva has focused on providing security for organizations in this transition, and the vast majority of them have a mix of legacy and modern applications across a hybrid environment. This is a key reason why we continue to invest in Web Application and API Protection that our customers can deploy in a variety of ways, from appliances in data centers to SaaS to native deployments in AWS, Microsoft Azure, and Google Cloud Platform (GCP).

But you cannot protect all of a modern organization simply by adapting the protection they already use — it takes innovative approaches to secure what now comprises the majority of all traffic: APIs. Imperva protected our customers’ APIs prior to 2021, but this year, it became a top priority. A few months ago, we added the ability for customers to discover the APIs receiving traffic outside the view of the security team. And to ensure our customers can continue their modernization, we acquired CloudVector for advanced API security protecting high-scale businesses, but more importantly, for the expertise in the team. Effectively protecting APIs requires a deep understanding of how development operations work and how much it differs from the application development of 8 years ago.
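The capability described above – discovering APIs that receive traffic outside the security team’s view – amounts to comparing endpoints observed in traffic against a documented inventory. The sketch below is a hypothetical illustration of that idea, not Imperva’s or CloudVector’s implementation; the path-normalization rule and the inventory are assumptions:

```python
# Hypothetical "shadow API" discovery sketch -- not a vendor implementation.
# Flags endpoints seen in access logs that are absent from the documented set.
import re

DOCUMENTED_APIS = {"/api/v1/users", "/api/v1/orders"}  # assumed inventory

def normalize(path):
    """Strip numeric path segments so /api/v1/users/42 groups as /api/v1/users."""
    return re.sub(r"/\d+(?=/|$)", "", path)

def discover_shadow_apis(log_paths):
    """Return observed endpoints missing from the documented inventory."""
    observed = {normalize(p) for p in log_paths}
    documented = {normalize(p) for p in DOCUMENTED_APIS}
    return sorted(observed - documented)
```

A real product would obviously go much further (passive traffic capture, schema inference, risk scoring), but the set difference between observed and documented endpoints is the core of the discovery step.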

“Imperva is a consistent company, strong in market share and with solid solutions,” says Douglas Bernardini, Cyber Security Specialist and Cloud Computing Expert.

If you want to learn more about Imperva’s approach, please view the recorded session with Lebin Cheng, Head of API Security, and Peter Klimek, Office of CTO, here.

Imperva, an Eight-Time Leader, recognizes industry needs beyond 2022

To handle all of this change, we believe we have the industry’s best approach to protecting our customers from innovative attacks, and we thank Gartner for this report’s recognition. Not every application security vendor has our track record of rapidly integrating the technology of its acquisitions, most recently shown by how the advanced bot management capabilities from Distil Networks were made available to Imperva customers in under a year. We look forward to the 2022 report, once Gartner and the broader market have seen what we will accomplish with the CloudVector team guiding the way.

To download the report, visit here.

To immediately start a free trial of our market-leading Cloud WAAP platform, visit our free trial site.

Gartner, “Magic Quadrant for Web Application and API Protection”; Jeremy D’Hoinne, Rajpreet Kaur, John Watts, Adam Hils, Shilpi Handa; September 20, 2021.

The report was previously named Magic Quadrant for Web Application Firewalls (through 2020). Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Imperva.

Source: https://securityboulevard.com/2021/09/imperva-an-eight-time-magic-quadrant-leader-for-web-application-and-api-protection/

See also: Capability Brief – WAF Gateway

Categories
Akamai Firewall WAP Web Application Protection

Akamai announces Future of Life Online Challenge, awarding digital innovators $1 million in services

Challenge will award up to four visionary companies with a total of $1 million in Akamai security, content delivery, and/or edge compute solutions

Akamai Technologies, Inc. (NASDAQ: AKAM), the world’s most trusted solution to power and protect digital experiences, today announces the launch of its Future of Life Online Challenge, celebrating and rewarding “the visionaries, the rebels, and the insanely curious innovators” shaping breakthrough online experiences. The challenge will award up to four winners an equal share of up to $1 million worth of Akamai security, content delivery and edge compute solutions and showcase their achievements in a special online docuseries.

“For more than 20 years, Akamai has devoted itself to powering and protecting the digital experiences that create online life as we know it. When done well, great online experiences elevate the entire human experience, so we want to empower those innovators who are defining our digital future,” said Robert Blumofe, executive vice president and CTO, Akamai Technologies. “The Future of Life Online Challenge is designed to help groundbreaking companies take their solutions to the next level and shine a spotlight on their achievements, inspiring others to develop their own big ideas that will create the future of life online.”

To enter, companies must have an innovative, viable product or service in the market that needs support in scaling digital security, web performance, and/or digital delivery to achieve its full potential. The challenge will be conducted in two rounds:

  • For Round One, companies must submit a brief video describing their product or solution and how it delivers value to customers or society. Round One applications with videos must be submitted by February 18, 2022, at 5 PM ET. Finalists from Round One will be announced March 11, 2022.
  • For Round Two, finalists must submit a business proposal, not exceeding seven pages, describing their customer segments, value proposition, distribution/sales channels, market size, growth plans, and sustainability focus. They need to attend a virtual conference to pitch their proposal and answer questions. Round Two proposals must be received by May 13, 2022, at 5 PM ET.

Proposals will be judged according to the novelty of the idea, the market viability, and the customer benefit. The Challenge winners will be announced on June 06, 2022.

“It is an incentive for innovation, one that will positively impact the cyber defense market,” says Douglas Bernardini, Cyber Security Specialist and Cloud Computing Expert.

Challenge entries must be submitted via the online application at www.futureoflifeonline.com. To qualify, companies may not be an existing direct or indirect customer of Akamai or of any subsidiary, affiliate, or channel partner of Akamai. For additional qualification requirements and the complete terms and conditions for the Challenge, please visit www.futureoflifeonline.com.

About Akamai
Akamai powers and protects life online. The most innovative companies worldwide choose Akamai to secure and deliver their digital experiences – helping billions of people live, work, and play every day. With the world’s largest and most trusted edge platform, Akamai keeps apps, code, and experiences closer to users – and threats farther away. Learn more about Akamai’s security, content delivery, and edge compute products and services at www.akamai.com and blogs.akamai.com, or follow Akamai Technologies on Twitter and LinkedIn.

Source: https://www.prnewswire.com/news-releases/akamai-announces-future-of-life-online-challenge-awarding-digital-innovators-1-million-in-services-301424376.html

See also: Akamai – Web Application Protector