We’re constantly collecting feedback from our users in an effort to improve your favorite performance and load testing software.
Here are a few items we’ve recently pushed live into our app for your convenience.
In the first installment of this mini-series, we outlined how to efficiently prepare for performance testing. In the previous article, we got into the nitty-gritty of what tests to run, and how to run them.
The final step toward ensuring you’re shipping high-performance applications and websites is to continuously test throughout your software development lifecycle.
If you’re a DevOps-minded organization that’s already working with Continuous Integration and Continuous Delivery (CI/CD) tools, then you’re off to a great start.
At Load Impact, we build a tool that helps you understand and continuously keep track of your application’s performance at varying levels of traffic.
Our software does this by simulating virtual users interacting with your application. (It's pretty cool, if you ask us and thousands of our users.)
Simply put: Load Impact is a performance testing service.
I have worked for Load Impact since its founding, and I’m going to share how we use the tool ourselves.
Tl;dr — More Load Impact customers than ever are continuously running load tests. In response, we’ve just shipped our “Performance Thresholds” feature. Now, engineers set Pass/Fail metrics for their scheduled tests in order to home in on the data that’s most important to them. Here’s how we think that helps you:
In the fast-moving world of software development, it’s important that engineers stay organized and maintain transparency in their work.
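To make the idea of pass/fail thresholds concrete, here is a minimal sketch. Note that this is our own illustration, not Load Impact's actual API: the metric names, operators and limits below are hypothetical examples.

```python
# Hypothetical thresholds: metric name -> (comparison, limit)
THRESHOLDS = {
    "p95_load_time_ms": ("<", 1000),  # 95th-percentile load time under 1 s
    "error_rate": ("<", 0.01),        # fewer than 1% failed requests
}

# Supported comparison operators for threshold expressions
OPS = {
    "<": lambda value, limit: value < limit,
    ">": lambda value, limit: value > limit,
}

def evaluate(metrics: dict) -> dict:
    """Return a Pass/Fail verdict per threshold for one test run."""
    return {
        name: "PASS" if OPS[op](metrics[name], limit) else "FAIL"
        for name, (op, limit) in THRESHOLDS.items()
    }

# A scheduled test run that loads quickly but has too many errors:
verdicts = evaluate({"p95_load_time_ms": 870, "error_rate": 0.03})
print(verdicts)
```

In a CI/CD pipeline, a wrapper script could fail the build whenever any threshold reports FAIL, so only runs that meet the agreed performance budget ship.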
Containers. Containers Everywhere. (Photo Courtesy of Flickr)
Here at Load Impact, we're focused on enabling DevOps teams to create more resilient software with massive scalability. In recent years, we have seen container technology enable DevOps teams to move faster, test more often and create higher-quality software.
Shannon Williams is the co-founder and VP of Sales and Marketing for Rancher Labs, a software platform for deploying a private container service. The company has also developed RancherOS, a minimalist OS built explicitly to run on Docker.
Rancher Labs prides itself on developing the next generation of cloud software, and their innovative technology enables customers to fully realize the benefits of containers in a scalable production environment.
Note: Load Impact contributing writer and consultant Peter Cannell conducted this Q-and-A. He's previously worked with Shannon at other startups.
Q: Where would you say the industry is in terms of adoption of DevOps today?
A: That is a great question and a fun question because it's changing as we speak. Five years ago we were talking to customers about deploying applications in the cloud and we got a lot of blank stares. People would literally point to their ITIL volumes on the shelf. Customers had spent the previous 10 years getting to this point where applications ran reliably, but with this came the era of change control.
Fast forward to today and you see banks that have DevOps teams, and healthcare organizations responding to pressure from the business to be more agile. These businesses have to respond to competitors that are able to roll out powerful technology in a fraction of the time, all while giving users amazing experiences.
The pendulum has swung from the change control era to the golden age of DevOps, or DevOps 2.0. The emergence of containers, cloud services and microservices has allowed for rapid-fire upgrades to applications and is a testament to what is happening out there today.
I'm always excited when I talk to teams using Rancher who have been given a mandate by leadership in operations or even the CIO to drive agility. From our perspective it's a brilliant time to be involved in DevOps as the broad majority of customers build, test and deploy software better than before.
Q: I couldn't agree more. It's great to hear your confirmation of the same dynamics we are seeing in the marketplace.
A: What is amazing is that the people in these DevOps roles used to be viewed as working on the least sexy part of IT. Now all of a sudden these teams are a strategic advantage. There is an enormous increase in salaries and demand for people who understand application lifecycles and operations today.
Q: Is it just that we are getting faster at deploying features today, or is the overall quality better as well?
A: Good question. I think it's both, but it's not uniform across the customer base. Moving to infrastructure as a service and Continuous Integration doesn't mean that everything becomes easy. There are plenty of pitfalls, growing pains and political resistance. Fundamentally we are seeing better quality as people design applications and infrastructure for failure. Additionally, so many new tools have been developed, such as the automated load testing Load Impact is doing, automated inspection of elements and monitoring.
Companies are rethinking logging, monitoring and implementing testing to be as dynamic as the rest of what they are doing — the net outcome is a dramatic improvement in both quality and quantity of code. Not only does feature function accelerate, but application stability and resilience improve as well.
I see this with all the customers I work with — it's not as if the infrastructure is better, servers and networks still fail — but designing for failure and building for resilience have become ingrained in developers now.
Q: I love the concept of resilience. What we see, and tell me if you agree, is that development teams are running more tests, and earlier in the development cycle, than ever before. Developers are running load and performance tests that used to be the domain of operations and networking. Are you seeing this dynamic as well?
A: We are absolutely seeing this! What is amazing is that with containers now, customers are bringing up a mirror of production in their development environment. They are running feature-functionality and systems tests throughout the development cycle on their own cluster. We enable this with Rancher as a container service for organizations adopting Docker. As Rancher is rolled out, one thing we see immediately is better testing because development mirrors production. Upgrades get more reliable, and the frequency of updates improves dramatically.
Customers want that magic, unicorn-like experience of faster releases, better stability and more reliability. There is another aspect to this I want to add, which is easier incorporation of more developers. The environment is more accessible to new developers and easier to build upon. As micro-services permeate application development, the interdependencies become easier to understand and new developers can ramp up faster.
Q: Let's switch gears a little and touch on containers vs. VMs in customer environments.
A: What is amazing is that containers can't really take off without the infrastructure of virtual machines that is out there today. I don't see a fundamental conflict between containers and VMs. There are cases where a VM is only being used for automation, and in those cases containers are a good replacement. Containers and VMs tend to be very compatible, and most Rancher (and likely Docker) customers are running on top of a VM. I don't think VMs are going away anytime soon from a host-resource management perspective.
Q: Let's talk more about Rancher. Where would you typically see Rancher deployed in an organization? Is it running on laptops like you might see with Docker images?
A: No, Rancher is more of a cluster management component. Moving a container from a developer's laptop to multiple hosts in the datacenter or a cloud environment is where a container service like Rancher becomes really compelling. The value of Rancher comes in when you are moving an application through the DevOps lifecycle. It is an orchestration and automation platform tied together with infrastructure services that are specific to containers.
With containers you have portability of the image, making them ubiquitously deployable on Linux and, in the near future, on Windows; you have this new component that can run anywhere. But fundamentally, the networking, storage and all the other elements surrounding the container are still very different from cloud to cloud, host to host, etc.
What Rancher implements is consistent storage and networking around the containers. By creating "micro-SDNs" between containers and deploying consistent storage services attached to containers (which can be ported between environments), an organization gains a layer of computing that runs identically on any infrastructure.
The same way a container runs identically on your laptop and in the cloud, an entire application blueprint can be created and not only will the containers run properly but the networking & storage will as well. Without having to do integration to the underlying cloud service you get orchestration, management and even load balancing that is consistent. The real value here is complete portability of an entire service.
Q: Who are the typical users of Rancher within a large enterprise today?
A: Today Rancher is open source (and in beta) and we have tens of thousands of downloads. We also have over 1,000 companies who have formally joined the beta program. What we find is that DevOps teams, cloud architects and development teams (that own applications) are the ones who get the most excited about Rancher within an organization.
Once Rancher is in place and they have deployed an application, it tends to get shared. The platform is very collaborative and designed to be multi-tenant, so it expands quickly. The consumer of Rancher can be anyone from a point-focused project all the way up to an entire IT department that has made deploying containers enterprise-wide a priority.
Q: Let's step back in time a bit and think back to our days at Teros and application security. We haven't talked much about security on this call, and in the past security has been viewed as slowing things down, being slow to adopt new changes and not really fitting any of the things we have talked about.
Where do you see security being incorporated in these new environments?
A: I think it depends on the organization. Customers want to understand how we isolate environments and how we handle encryption and network traffic. Key and secret management is an important part of running an application as well.
I think security, like so many other things, is a reflection of the size and maturity of the organization. For organizations that have aggressively adopted DevOps, security has joined the party. Those teams see this as a way to accelerate innovation, and instead of resisting change they are deploying new tools to enhance security.
In organizations where DevOps is a newer phenomenon, security can be an issue. You may run into teams that are more traditional and tend to push back on the pace of change. That is a fight (resisting change) that security will always lose.
It's just a matter of time before security organizations that haven't adopted this pace of change realize they are a critical part of the process. They need to be coming up with new solutions instead of resisting where the business is going.
Most security teams want to be part of this dynamic, faster changing world. Many organizations we talk to have either started to get security on board or are starting that process now. I don't think we will be talking about this problem in 3 or 4 years.
Q: I think we will see a number of new security innovations as these micro-services evolve. We are starting to see a new term, SecOps, emerge as well. I'm excited to see what this brings.
A: With the ability to do things like distribute agents as micro-services to containers, inspect the network and model behaviors (which containers talk to which containers), security will just get better. We are seeing machine learning and big data applied to all of this new data about applications, and how things normally communicate and behave, to improve security.
A wink of the Load Impact eye to Shannon for taking the time for this interview and his market insight. This is an exciting time to be in the evolving world of DevOps and it is great to hear directly from one of the most innovative companies in this space.
Our event might even have these sweet balloons (Photo Courtesy of StockSnap.io)
We’ve told you all about our upcoming appearance at Velocity NY. Now we’re adding a post-Velocity event to our social calendar, and you’re invited!
We’ll provide the free pizza and craft beer — and you can come enjoy a night of networking and intelligent conversation on HTTP/2 and web performance.
The main event of the evening will be Load Impact founder Ragnar Lonn showing off HTTP/2 vs. HTTP/1.1: A Performance Analysis, an innovative application that helps web developers understand how their websites will perform on HTTP/2.
HTTP/2 vs. HTTP/1.1 gives you real insight into your website's performance
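To see why multiplexing can matter, here is a toy latency model. This is our own illustration, not the methodology of the study itself: HTTP/1.1 browsers typically open about six parallel connections per host and issue one request at a time on each, while HTTP/2 can multiplex every request over a single connection.

```python
def http1_round_trips(resources: int, connections: int = 6) -> int:
    """HTTP/1.1 without pipelining: one outstanding request per
    connection, with ~6 parallel connections per host."""
    return -(-resources // connections)  # ceiling division

def http2_round_trips(resources: int) -> int:
    """HTTP/2: all requests multiplexed on one connection, so one
    round trip covers them all (ignoring bandwidth and server limits)."""
    return 1

RTT_MS = 50   # assumed round-trip time to the server
N = 30        # a page with 30 resources

print(f"HTTP/1.1: ~{http1_round_trips(N) * RTT_MS} ms spent on round trips")
print(f"HTTP/2:   ~{http2_round_trips(N) * RTT_MS} ms spent on round trips")
```

Real pages are also bandwidth- and server-limited, which is exactly why measured results like those in the study are more informative than a simple model like this.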
As for the event, here’s a quick rundown of the meetup’s agenda:
6:15 pm: Guests Arrive — Pizza and Beer!
6:30 pm: Introduction and The Future of Web Performance with Robin Gustafsson
6:45 pm: HTTP/2 vs. HTTP/1.1: A Performance Analysis with Ragnar Lonn
7:10 pm: Networking, and time to finish the leftover pizza and beer!
The meetup will be hosted by our friends at Betterment, the innovative investment platform built for the connected generation.
Each talk will be followed by a brief Q-and-A, and both presenters will be available after their talks to chat with guests.
If you want to get an earlier look at HTTP/2 vs. HTTP/1.1, check us out at Velocity NY, where Ragnar and HTTP/2 contributor Daniel Stenberg will be unveiling the findings from their study on the new protocol that promises better web performance. Register for Velocity NY with the promo code RAGNAR20 to get 20% off your pass.
IoT Central has grown to nearly 2,800 members in a little more than a year of existence
— Load Impact is constantly on the hunt for the best meetups and conferences around New York City. In this new blog series, NYC Tech Events, we talk about some of the tech-focused events around the city that feature great speakers and promote the sharing of ideas from professionals and hobbyists of all skill levels.
The Internet of Things has been a rapidly growing sector of the technology scene for quite a while, and it’s not showing any signs of slowing down.
Startups and massive companies alike are embracing IoT technology, and it’s no surprise that many NYC-based companies are innovating and helping create a connected world.
Similar to the popularity of IoT, the meetup group has experienced rapid growth in a little more than one year — exploding to nearly 2,800 members in that time.
“IoT Central’s mission is to inspire, engage and connect entrepreneurs, investors and professionals and enthusiasts worldwide,” said Golner.
The group typically meets once a month and has a few different types of programs — ranging from demo nights, to panel discussions, to networking events and anything else the community is interested in.
Golner said one of his favorite parts of the meetups has been the “member announcements” portion near the end of each event. That’s when the floor is open for attendees to make a quick announcement, which Golner says have been a good mix of job openings, quick pitches about a product or service and people announcing they are available for new job opportunities.
That kind of inclusion may be one of the many reasons why IoT Central has not only quickly grown into one of the 10 largest IoT meetups in the world, but is also made up of very loyal members.
In December of 2014, Golner and IoT Central hosted the NYC IoT Fair. The event was a success, and it boasted around 500 attendees with 23 companies presenting. The fair was sponsored by leading tech companies, including Microsoft, Samsung, IBM, Verizon, Flextronics, Atmel, Indiegogo, wot.io and others.
Alongside the big-name sponsors and hundreds of attendees, Golner said there was a special detail leading up to the event that still sticks out to him.
“We needed about 30 volunteers for the Fair, so I asked members of the meetup if they would consider volunteering,” Golner said. “There were more than 100 people who offered to volunteer. I was blown away by the response.”
The fact that so many members of IoT Central are dedicated and passionate about the space has helped Golner pick topics and formats and organize events.
“It’s a community-organized meetup,” Golner said. “There’s no doubt about that.”
What’s Next for IoT?
While a lot of people might think of connected cars, homes and other consumer uses for IoT, Golner said he’s most excited about the potential for advancements in industries like manufacturing, healthcare, Smart Agriculture and Smart Cities.
“IoT has the potential to impact almost every vertical,” Golner said. “While consumers are aware of connected products such as fitness and activity trackers and Smart Home products, the consumer-facing side is just one area in which IoT will impact our lives.”
As IoT continues to grow — both in the consumer world and the B2B world — IoT Central will definitely be there for New Yorkers (and people visiting the city) to connect with other professionals, hobbyists and people who are just interested in the technology’s seemingly limitless possibilities.
Are you an IoT professional, hobbyist or just someone interested in the technology? You can join IoT Central here.
Planning to load test your IoT device? Check out this article from Load Impact blog contributor, Peter Cannell: Load Testing a REST API on a Low-Power IoT Platform.
— This is Part 3 of Load Impact’s Velocity NY Preview Series. Load Impact is chatting with some of the cutting-edge developers and executives who will be speaking at Velocity NY Oct. 12-14.
“It was impossible to get regular work done because we were running around putting out fires all day.”
Does that sound familiar?
When it comes to your website, app, API, SaaS product or infrastructure, a minor problem can turn into a major crisis very quickly, and that can hurt your reputation with customers and cost you time and money.
That’s why Blackrock 3 Partners, a team made up of firefighters and technology professionals, is coming to Velocity NY to teach you the finer points of incident management.
In their tutorial, Incident Management for DevOps, Rob Schnepp, Ron Vidal and Chris Hawley will demonstrate the parallels between putting out a five-alarm fire in an apartment building and responding to a data breach.
“There’s a lot of interest in how the fire service does business because we look organized and it works,” said Schnepp. “But there’s a mystique about it because not everyone understands how organized and structured it really is.”
Blackrock 3 uses terms like “Peacetime vs. Wartime” communication and operations, “war games in production” and other phrases traditionally used by the military.
That’s not because a crashed server is equivalent to a person being seriously injured in battle, but because handling adverse conditions is a skill that can be learned, practiced and fine-tuned.
The team at Blackrock 3 stresses that software companies can create an ecosystem to respond to emergencies, minimize impact and learn from those experiences. That includes setting strategies for immediate response, practicing how to start correcting problems in the middle of the crisis and designating an “incident commander.”
To do that, Blackrock 3 often turns to its “war games in production” strategy with clients, which can be surprising to some.
“There are times where we go in to work with a company and plan to break stuff on purpose,” said Vidal. “Sometimes people are taken aback by that at first, but how else can you prepare for the randomness of the world unless you really have to solve a problem under some level of pressure?”
After an incident has been controlled and resolved, Blackrock 3 puts a heavy focus on thorough after-action reviews — commonly known as “post mortems.” Emergency services have a structured plan for post mortems, too, which is another practice Blackrock 3 is bringing to its partners.
“Post mortems almost always focus on the technology aspect of a problem,” said Schnepp. “They rarely evaluate the human response and how to make that better.”
Blackrock 3 suggests striving for honest, blame-free after action reviews that analyze people’s thought process and logic during a crisis and how future training can improve responses moving forward.
While on the surface people normally wouldn’t think the fire department or other emergency services have much in common with technology companies, Schnepp and Vidal said startup founders, CTOs and everyone they’ve worked with “gets it” from the beginning.
“The same management tactics people use on oil spills can work in the tech business,” said Schnepp. “It’s not a magical formula, but the results are magical.”
Check out Blackrock 3’s Book
The team’s vast experience responding to a wide range of catastrophic events not only led them to form Blackrock 3, but also to author the book Incident Management for Operations, recently published by O’Reilly Media.