
5 Microservices Trends in 2018

Posted by Jeff Pelliccio on Apr 9, 2018 9:00:00 AM

In ICS insights

In 2017, a number of new ecosystems came into play that changed DevOps, with the number of Cloud Native Computing Foundation (CNCF) projects tripling and more players entering the field. In the coming year, further advances and changes will likely accelerate the market even more. In this article, we look at the 2018 microservices trends, including:

  • service meshes
  • GraphQL
  • container-native security
  • event-driven architectures
  • chaos engineering

We plan to keep an eye on these trends and companies that apply them to business use cases in the coming year. 

1. Service Meshes are In Demand

A service mesh is a dedicated infrastructure layer that handles service-to-service communication. For anyone building a cloud-native application, a service mesh makes that communication fast, safe, and reliable.

Service meshes are getting a ton of buzz in the cloud-native category. As containers become more prevalent, service topologies grow more dynamic, which demands more advanced network functionality. A service mesh manages traffic through service discovery, load balancing, and routing, and it checks and monitors the health of the system. Service meshes make it possible to tame otherwise unfathomable container complexity.
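To make the idea concrete, here is a minimal, hypothetical Python sketch of what a mesh's data plane (a sidecar proxy) does on every call: look up instances, balance load, retry failures, and track unhealthy endpoints. The registry, addresses, and simulated failures are invented for illustration; real meshes such as Linkerd or Istio do this transparently at the network layer.

```python
import itertools
import random

# Hypothetical service registry; a real mesh discovers instances dynamically.
SERVICE_REGISTRY = {
    "orders": ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"],
}

class SidecarProxy:
    """Toy stand-in for a service-mesh data plane."""

    def __init__(self, registry):
        self.round_robin = {name: itertools.cycle(addrs) for name, addrs in registry.items()}
        self.unhealthy = set()

    def call(self, service, request, retries=3):
        """Route a request to a healthy instance, retrying on failure."""
        for _ in range(retries):
            instance = next(self.round_robin[service])
            if instance in self.unhealthy:
                continue
            try:
                return self.send(instance, request)
            except ConnectionError:
                self.unhealthy.add(instance)  # mark for health checking
        raise RuntimeError(f"all retries to '{service}' failed")

    def send(self, instance, request):
        # Placeholder for the real network call (HTTP/gRPC in an actual proxy).
        if random.random() < 0.2:
            raise ConnectionError(instance)
        return {"instance": instance, "echo": request}

proxy = SidecarProxy(SERVICE_REGISTRY)
print(proxy.call("orders", {"order_id": 42}))
```

The point of the sketch is that none of this logic lives in your application code; the mesh supplies it for every service in the cluster.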

You can leverage service meshes for chaos engineering, the practice of deliberately disrupting a distributed system to build confidence in its ability to keep processing under stress. Although there isn't widespread deployment yet, load balancers such as Traefik, HAProxy, and NGINX have positioned themselves as data planes for this new technology, and some businesses already use service meshes in their production environments. Keep in mind that service meshes aren't just a microservices or Kubernetes innovation; they can also be applied to virtual machine and serverless distributed environments. For instance, the National Center for Biotechnology Information uses Linkerd even though it doesn't run containers.

Istio and Buoyant's Linkerd are the best-known offerings so far. In fact, Buoyant has released Conduit v0.1, an open-source service mesh for Kubernetes.

2. Event-Driven Architectures

Agility is still foremost in the minds of business decision-makers. In a "push," or event-driven, architecture, one service publishes a message and the observer containers listening for it react, running their logic asynchronously. This is very different from request-response architectures because event-driven systems are not functionally dependent on downstream processes, which lets developers work more independently while building their services.
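As a rough illustration of the push model, the sketch below uses an in-process Python queue as a stand-in for a real message broker; the service and event names are made up.

```python
import queue
import threading

# The queue stands in for a message broker (Kafka, SNS/SQS, etc.).
event_bus = queue.Queue()

def order_service():
    # Publishes an event and moves on; it never waits on downstream work.
    event_bus.put({"type": "order.created", "order_id": 42})

def email_observer():
    # Runs independently, reacting whenever a matching event arrives.
    event = event_bus.get()
    print(f"sending confirmation for order {event['order_id']}")
    event_bus.task_done()

threading.Thread(target=email_observer, daemon=True).start()
order_service()          # the producer returns immediately
event_bus.join()         # the observer handles the event asynchronously
```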

Developers now build container architectures that execute in response to specified events, a capability that Function-as-a-Service (FaaS) builds on. In FaaS, functions are stored as text in a database until a trigger occurs. When the function is called, an API controller receives the message and passes it through a load balancer to a message bus, which queues it for an invoker container. Once the function executes, the result is stored and returned to the user, and the function sleeps until it is triggered again.
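Here is a small sketch of what such a function can look like, written in the style of an AWS Lambda Python handler; the event fields and the order-processing logic are hypothetical.

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime metadata.
    records = event.get("Records", [])
    processed = [json.loads(r["body"])["order_id"] for r in records if "body" in r]

    # The result is returned (or stored) and the function goes idle
    # until the next trigger fires.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed_orders": processed}),
    }

if __name__ == "__main__":
    # Local smoke test with a fabricated queue-style event.
    fake_event = {"Records": [{"body": json.dumps({"order_id": 42})}]}
    print(lambda_handler(fake_event, context=None))
```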

Benefits of FaaS:

  • Shortens the time between writing code and running it, thanks to artifact-free creation and push
  • Decreases your overhead, since FaaS platforms such as AWS Lambda manage and scale the functions for you

Note that FaaS has its own challenges. It requires that you unlink each piece of an executed service, which leads to a myriad of functions that are difficult to find, monitor, manage, and manipulate. And without visibility into dependencies, FaaS systems are very hard to debug.

Currently, FaaS is not your best bet for processes that require long invocations, hold huge amounts of data in memory, or need consistent performance. Most FaaS applications handle temporal events or background jobs. Adoption will grow as the storage layer becomes more robust and the corresponding platforms improve in performance.

In 2017, CNCF conducted a survey of 550 people. Of these, 31 percent were using serverless technology, while another 28 percent planned to adopt it within 18 months. Follow-up details revealed that 77 percent of the 169 respondents using serverless technology run on AWS Lambda, which isn't surprising since Lambda is the leading serverless platform with the highest market share. This still hints at untapped opportunities in the space, and edge compute promises powerful IoT and AR/VR utility. This is definitely a trend worth following, especially for those working in the smart-gadget market.

3. Security Changes for the New Tech

By default, applications that use containers benefit from heightened security visibility thanks to kernel access. In a Virtual Machine ecosystem, there is a single point of visibility: the virtual device driver. In a container environment, by contrast, the operating system's syscalls and semantics define a much more expressive signal. Previously, operators captured only part of that signal by dropping an agent into their VM, a methodology that was complicated and prone to errors. Containers provide this transparency natively, and integration in a container environment is far less complex than in a VM.

A 451 Research survey revealed that security is considered the largest obstacle to adopting container technology. At first, the concern was the vulnerability of the ecosystem itself: public registries hold many canned container images, so effort went into securing them. That concern was largely resolved by image-scanning and authentication requirements.

Virtualized environments use a hypervisor to access and control the kernel, which in turn has access to each container running on it. Organizations therefore have to control how and when containers communicate with the host, and which actions specific containers are allowed to execute when called by the system. Hardening the host and configuring namespaces and cgroups optimally helps maintain security.
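As one illustration of that kind of hardening, the sketch below launches a container with a restrictive configuration via the Docker CLI (assuming Docker is installed); the image, limits, and command are examples only.

```python
import subprocess

# Launch a container with capabilities dropped and cgroup limits applied.
hardened_run = [
    "docker", "run", "--rm",
    "--cap-drop=ALL",        # no extra kernel capabilities
    "--read-only",           # immutable root filesystem
    "--memory=256m",         # cgroup memory limit
    "--pids-limit=100",      # cgroup process limit
    "--network=none",        # no network access unless explicitly needed
    "alpine:3.17", "echo", "hardened container",
]
subprocess.run(hardened_run, check=True)
```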

Further, traditional firewalls rely on IP address rules to establish and confirm system security. That technique can't be stretched to cover container environments, where dynamically scheduled containers constantly reuse IP addresses. When a runtime threat is detected in production, it's critical that a response goes out quickly, which means fingerprinting the actual container environment. Once you construct a behavioral baseline, anomalous behavior from a bad actor is easy to verify. According to a 451 Research report, 52 percent of the companies surveyed use containers, which demonstrates the increasing reliability of the technology.
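The following toy sketch shows the fingerprinting idea: record a baseline of expected behavior, then flag anything outside it. The baseline contents and events are invented; production tools derive this from kernel-level telemetry rather than hard-coded sets.

```python
# Expected behavior for one container, recorded during normal operation.
BASELINE = {
    "processes": {"nginx", "sh"},
    "listening_ports": {80, 443},
}

def is_anomalous(observed_event):
    """Return True if an observed event falls outside the baseline."""
    kind, value = observed_event
    if kind == "process" and value not in BASELINE["processes"]:
        return True
    if kind == "port" and value not in BASELINE["listening_ports"]:
        return True
    return False

events = [("process", "nginx"), ("port", 80), ("process", "nc"), ("port", 4444)]
for event in events:
    if is_anomalous(event):
        print(f"ALERT: unexpected behavior {event}")
```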

4. Moving to GraphQL from REST

Originally developed by Facebook in 2012, GraphQL was released as open source in 2015. GraphQL is essentially an API specification built around a specialized query language, and the technology includes a runtime to execute the queries. The system lets you define data schemas and dynamically add new fields, or age out old ones, with no effect on existing queries and no restructuring required on the client. GraphQL can be used without adhering to any particular database or storage restrictions.

A GraphQL server acts as a single HTTP endpoint expressing the full capabilities of its service. Because GraphQL defines the interaction in terms of resources' fields and types (not endpoints, as REST does), it can track references among properties, so a service can get data from multiple resources with the same query. A REST API, by contrast, has to load multiple URLs for each request, adding network hops and slowing result retrieval. GraphQL decreases the resources required for each request and returns the data faster, usually formatted as JSON.
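For a sense of what that looks like in practice, here is a small Python sketch that sends one query to a single GraphQL endpoint and pulls a user plus their recent orders in one round trip; the URL and schema are hypothetical.

```python
import json
import urllib.request

# One query fetches fields from several related resources at once.
query = """
{
  user(id: "42") {
    name
    orders(last: 3) {
      id
      total
    }
  }
}
"""

request = urllib.request.Request(
    "https://api.example.com/graphql",            # single endpoint for everything
    data=json.dumps({"query": query}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # JSON shaped exactly like the query
```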

Another benefit of GraphQL over REST is that it decouples clients and servers, making it possible to maintain them separately. Also unlike REST, GraphQL uses similar syntax for client and server communication, which greatly simplifies debugging: the shape of the query matches the shape of the data you fetch from the server. This makes GraphQL highly efficient compared with query languages such as SQL or Gremlin, and the simpler queries support a more stable process.

In November, Amazon validated GraphQL when it launched AWS AppSync, featuring GraphQL support, and developers across a myriad of applications have since adopted the new platform. Those still curious about GraphQL will surely follow its growth as it evolves alongside gRPC and Twitch's Twirp RPC framework.

5. Chaos Engineering

Popularized by Netflix, and then adopted by Google, Amazon, Facebook, and Microsoft, chaos engineering is the practice of experimenting on a system to build confidence in how it behaves under unpredictable conditions. The discipline has matured considerably over the past decade.

Netflix's Chaos Monkey began the movement by turning off production services at random, and the company scaled the practice with Failure Injection Testing (FIT) and Chaos Kong for its larger environments. On the surface, chaos engineering seems to be about using turmoil to upset traditional methodologies, but it isn't fashioned just to "break" systems via stress testing. Instead, chaos engineering has a broader scope and adds a new element to development: besides injecting failures, it can introduce traffic spikes and unusual requests to predict and fix potential production issues.
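As a toy example of failure injection at the application level, the Python sketch below wraps a service call so that it sometimes fails or slows down, letting you observe how callers cope. The probabilities and the wrapped function are invented; real chaos tooling such as Chaos Monkey operates at the infrastructure level.

```python
import random
import time

def chaotic(func, failure_rate=0.1, max_delay_s=2.0):
    """Wrap a call so it occasionally fails or responds slowly."""
    def wrapper(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError("chaos: injected failure")
        time.sleep(random.uniform(0, max_delay_s))  # injected latency
        return func(*args, **kwargs)
    return wrapper

@chaotic
def fetch_recommendations(user_id):
    return ["item-1", "item-2"]

# Callers must tolerate the injected failures (timeouts, retries, fallbacks).
try:
    print(fetch_recommendations(42))
except ConnectionError:
    print("fell back to a cached recommendation list")
```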

Beyond verifying or debunking assumptions, it reveals new properties of the system. There is something remarkable about the way chaos can be used to uncover and fix system weaknesses, improving the system's predictability and reliability. This increases resiliency and ensures better service to your customers.

Other developments, such as neural networks and deep learning, are so complex they surpass the understanding of their human makers, and chaos engineering can help bridge the gap by enabling holistic testing. Expect it to become accepted practice as developers struggle to make their increasingly complex systems more reliable. As chaos engineering goes mainstream, it will materialize in existing open source projects, commercial offerings, and service meshes.

These and other microservices trends are making web-based computing more secure and efficient, and there are even more revolutionary changes on the horizon.

Keep These Trends in Mind

If you are currently looking for a job or are considering making a move, remember that a keen understanding of your industry and its relevant technology is important during an interview. Contact ICS to have someone in your back pocket to help you through the job search. Knowing these trends can make you a better candidate and employee, and your ability to anticipate what's coming will allow you to bring success to your company.
