Events

Videos provided by AWS re:Invent via their YouTube Channel

AWS re:Invent 2013 Schedule

November 12 - 15, 2013

(74 available presentations)
Rating: Everyone
Viewed 299 times
Recorded at: November 12, 2013
Date Posted: September 5, 2014

The Amazon Simple Workflow (Amazon SWF) service is a building block for highly scalable applications. Where Amazon EC2 helps developers scale compute and Amazon S3 helps developers scale storage, Amazon SWF helps developers scale their business logic. Customers use Amazon SWF to coordinate, operate, and audit work across multiple machines—across the cloud or their own data centers. In this power-packed session, we demonstrate the power of workflows through 7 customer stories and 7 use cases, in 7 minutes each. We show how you can use Amazon SWF for curating social media streams, processing user-generated video, managing CRM workflows, and more. We show how customers are using Amazon SWF to automate virtually any script, library, job, or workflow and scale their application pipeline cost-effectively.
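
For a concrete taste of coordinating work with Amazon SWF, here is a minimal activity-worker sketch in Python using the boto3 SDK (a modern SDK used purely for illustration); the domain, task list, and result payload are hypothetical.

```python
import boto3

swf = boto3.client("swf", region_name="us-east-1")

# Poll for an activity task (domain and task list names are hypothetical).
task = swf.poll_for_activity_task(
    domain="video-processing",
    taskList={"name": "encode-tasks"},
    identity="worker-1",
)

# An empty response (no taskToken) means the long poll timed out with no work.
if task.get("taskToken"):
    # ... perform the actual unit of work here, e.g., transcode a video ...
    swf.respond_activity_task_completed(
        taskToken=task["taskToken"],
        result="encoded s3://bucket/key",  # hypothetical result payload
    )
```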

Rating: Everyone
Viewed 843 times
Recorded at: November 12, 2013
Date Posted: September 5, 2014

This is a technical architect's case study of how Loggly has employed the latest social-media-scale technologies as the backbone ingestion processing for our multi-tenant, geo-distributed, and real-time log management system. This presentation describes design details of how we built a second-generation system fully leveraging AWS services including Amazon Route 53 DNS with heartbeat and latency-based routing, multi-region VPCs, Elastic Load Balancing, Amazon Relational Database Service, and a number of proactive and reactive approaches to scaling computational and indexing capacity.
The talk includes lessons learned in our first-generation release, validated by thousands of customers; speed bumps and the mistakes we made along the way; various data models and architectures previously considered; and success at scale: speeds, feeds, and an unmeltable log processing engine.

Rating: Everyone
Viewed 254 times
Recorded at: November 12, 2013
Date Posted: September 5, 2014

AWS offers services that revolutionize the scale and cost for customers to extract information from large data sets, commonly called Big Data. This session analyzes Amazon CloudFront logs combined with additional structured data as a scenario for correlating log and transactional data. Successfully implementing this type of solution requires architects and developers to assemble a set of services with multiple decision points. The session provides a design and example of architecting and implementing the scenario using Amazon S3, AWS Data Pipeline, Amazon Elastic MapReduce, and Amazon Redshift. It explores loading, query performance, security, incremental updates, and design trade-off decisions.

Rating: Everyone
Viewed 241 times
Recorded at: November 12, 2013
Date Posted: September 10, 2014

Learn how to deliver low latency and high throughput for web-scale applications built on Amazon DynamoDB. We show you how to model data, maintain maximum throughput, drive analytics, and use secondary indexes with Amazon DynamoDB. You also hear how customers have built large-scale applications and the real-world lessons they've learned along the way.
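
For readers who want a concrete picture of the secondary-index pattern the session covers, here is a minimal boto3 sketch; the table, index, and attribute names are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Create a table with a global secondary index so items can also be
# queried by customer, not only by order id (all names hypothetical).
table = dynamodb.create_table(
    TableName="Orders",
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    AttributeDefinitions=[
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "CustomerId", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "CustomerIndex",
        "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
table.wait_until_exists()

# Query the index: all orders for one customer.
resp = table.query(
    IndexName="CustomerIndex",
    KeyConditionExpression=Key("CustomerId").eq("c-123"),
)
```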

Rating: Everyone
Viewed 241 times
Recorded at: November 12, 2013
Date Posted: September 10, 2014

Come learn about architecting high-performance applications and production workloads using Amazon RDS for SQL Server. Understand how to migrate your data to an Amazon RDS instance, apply security best practices, and optimize your database instance and applications for high availability.

Rating: Everyone
Viewed 237 times
Recorded at: November 12, 2013
Date Posted: September 9, 2014

Big data technologies let you work with any velocity, volume, or variety of data in a highly productive environment. Join the General Manager of Amazon EMR, Peter Sirota, to learn how to scale your analytics, use Hadoop with Amazon EMR, write queries with Hive, develop real world data flows with Pig, and understand the operational needs of a production data platform.

Rating: Everyone
Viewed 253 times
Recorded at: November 12, 2013
Date Posted: September 9, 2014

How does Netflix stay on top of the operations of its Internet service with millions of users and billions of metrics? With Atlas, its own massively distributed, large-scale monitoring system. Come learn how Netflix built Atlas with multiple processing pipelines using Amazon S3 and Amazon EMR to provide low-latency access to billions of metrics while supporting query-time aggregation along multiple dimensions.

Rating: Everyone
Viewed 219 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Since Amazon Redshift launched last year, it has been adopted by a wide variety of companies for data warehousing. In this session, learn how customers NASDAQ, HauteLook, and Roundarch Isobar are taking advantage of Amazon Redshift for three unique use cases: enterprise, big data, and SaaS. Learn about their implementations and how they made data analysis faster, cheaper, and easier with Amazon Redshift.

Rating: Everyone
Viewed 216 times
Recorded at: November 13, 2013
Date Posted: September 5, 2014

Amazon AppStream is a new service that provides developers with the ability to stream resource intensive applications, such as 3D games or interactive HD applications, from the cloud. With Amazon AppStream, mobile and PC developers have the flexibility to stream their entire application or only parts of their application that need additional cloud resources. You will learn how to build, upload, and deploy your first application, how to create clients for PC and mobile devices, and how to optimize your application for Amazon AppStream.

Rating: Everyone
Viewed 266 times
Recorded at: November 13, 2013
Date Posted: September 1, 2014

With AWS, companies now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100 percent API driven enables businesses to use lean methodologies and realize these benefits. This in turn leads to greater success for those who make use of these practices. In this session, we talk about some key concepts and design patterns for continuous deployment and continuous integration, two elements of lean development of applications and infrastructures.

Rating: Everyone
Viewed 443 times
Recorded at: November 13, 2013
Date Posted: September 5, 2014

You're on the verge of a new startup and you need to build a world-class, high-scale web application on AWS so it can handle millions of users. How do you build it quickly without having to reinvent and re-implement the best practices of large successful Internet companies? NetflixOSS is your answer. In this session, we'll cover how an emerging startup can leverage the different open source tools that Netflix has developed and uses every day in production, ranging from baking and deploying applications (Asgard, Aminator), to hardening resiliency to failures (Hystrix, Simian Army, Zuul), to making them highly distributed and load balanced (Eureka, Ribbon, Archaius), to managing your AWS resources efficiently and effectively (Edda, Ice). You'll learn how to get started with these tools and pick up best practices from the engineers who created them, so that, like Netflix, you too can unleash the power of AWS and scale your application processes as you grow.

Rating: Everyone
Viewed 244 times
Recorded at: November 13, 2013
Date Posted: September 5, 2014

Traditionally, IT organizations have treated infrastructure components like family pets. We name them, we worry about them, and we let them wake us up at 4:00 am. Amazon CTO Werner Vogels has dubbed this behavior 'server hugging' and called it antiquated in today's cloud infrastructures. In this breakout session, we will discuss methods to get away from server hugging and be concerned more with the overall status and life of our entire infrastructure. From making use of disposable, on-demand infrastructure, to monitoring services rather than individual servers, to getting away from naming instances, this session helps you see your infrastructure for what it is: technology that you control.

Rating: Everyone
Viewed 185 times
Recorded at: November 13, 2013
Date Posted: September 5, 2014

Scaling your application as you grow should not mean slow load times and expensive infrastructure. Learn how you can use different AWS building blocks such as Amazon ElastiCache and Amazon CloudFront to "cache everything possible" and increase the performance of your application by caching your frequently accessed content. This means caching at different layers of the stack: from HTML pages to long-running database queries and search results, from static media content to application objects. And how can caching more actually cost less? Attend this session to find out!
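
As a taste of the "cache everything possible" idea, here is a minimal cache-aside sketch in Python. It assumes the third-party redis-py client, a hypothetical ElastiCache Redis endpoint, and a hypothetical run_expensive_database_query helper.

```python
import json
import redis  # third-party redis-py client (assumption)

# Hypothetical ElastiCache Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id):
    """Cache-aside: serve from cache when possible, else compute and cache."""
    key = "product:%s" % product_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: skip the database
    product = run_expensive_database_query(product_id)  # hypothetical helper
    cache.setex(key, 300, json.dumps(product))          # cache for 5 minutes
    return product
```

The same shape applies at every layer the abstract lists; only the key, the TTL, and the expensive operation change.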

Rating: Everyone
Viewed 167 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Amazon DynamoDB is a fully-managed, zero-admin, high-speed NoSQL database service. Amazon DynamoDB was built to support applications at any scale. With the click of a button, you can scale your database capacity from a few hundred I/Os per second to hundreds of thousands of I/Os per second or more. You can dynamically scale your database to keep up with your application's requirements while minimizing costs during low-traffic periods. The service has no limit on storage. You also learn about Amazon DynamoDB's design principles and history.
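
To illustrate the "click of a button" capacity scaling described above, a minimal boto3 sketch (table name and capacity numbers hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Scale provisioned capacity up for peak traffic; scale it back down
# the same way during low-traffic periods to minimize cost.
dynamodb.update_table(
    TableName="Events",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100000,
        "WriteCapacityUnits": 50000,
    },
)
```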

Rating: Everyone
Viewed 235 times
Recorded at: November 13, 2013
Date Posted: September 5, 2014

It's difficult to imagine an app without email. When you integrate Amazon Simple Email Service (Amazon SES), you not only increase your productivity and add richness to your application but also enhance your ability to scale your application to new heights. But how do you scale the service as you grow your application? In this session, we tell a story that weaves together the best practices of sending high-volume email, such as basic inbox placement, a high-throughput pipeline to email ISPs, message signing, and retries in the face of temporary failures, and show how to scale up your application to best take advantage of what Amazon SES can do for your business.
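
A minimal sketch of one practice the abstract mentions, retrying sends on temporary failures, using boto3; the addresses are hypothetical, and Throttling is the error code SES returns when you exceed your sending rate.

```python
import time
import boto3
from botocore.exceptions import ClientError

ses = boto3.client("ses", region_name="us-east-1")

def send_with_retries(source, to_address, subject, body, attempts=5):
    """Retry sends with exponential backoff on temporary (throttling) failures."""
    for attempt in range(attempts):
        try:
            return ses.send_email(
                Source=source,
                Destination={"ToAddresses": [to_address]},
                Message={
                    "Subject": {"Data": subject},
                    "Body": {"Text": {"Data": body}},
                },
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "Throttling":
                raise              # permanent failure: surface it
            time.sleep(2 ** attempt)  # back off, then retry
    raise RuntimeError("send_email kept throttling after %d attempts" % attempts)
```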

Rating: Everyone
Viewed 492 times
Recorded at: November 13, 2013
Date Posted: September 10, 2014

AWS Elastic Beanstalk provides a number of simple, flexible interfaces for developing and deploying your applications. In this session, learn how ThoughtWorks leverages the Elastic Beanstalk API to continuously deliver its applications with smoke tests and blue-green deployments. Also learn how to deploy your apps with Git and eb, a powerful CLI that allows developers to create, configure, and manage Elastic Beanstalk applications and environments from the command line.
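
Blue-green deployment on Elastic Beanstalk ultimately boils down to a CNAME swap between two environments. A minimal boto3 sketch with hypothetical environment names (the session itself works through the Elastic Beanstalk API and the eb CLI):

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# After deploying the new version to the "green" environment and running
# smoke tests against it, swap CNAMEs so traffic cuts over atomically.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",       # hypothetical
    DestinationEnvironmentName="myapp-green", # hypothetical
)
```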

Rating: Everyone
Viewed 251 times
Recorded at: November 13, 2013
Date Posted: September 10, 2014

This session explains how Netflix is using the capabilities of AWS to balance the rate of change against the risk of introducing a fault. Netflix uses a modular architecture with fault isolation and fallback logic for dependencies to maximize availability. This approach allows for rapid independent evolution of individual components to maximize the pace of innovation and A/B testing, and offers nearly unlimited scalability as the business grows. Learn how we balance managing change to (or subtraction from) the customer experience, while aggressively scraping barnacle features that add complexity for little value.

Rating: Everyone
Viewed 237 times
Recorded at: November 13, 2013
Date Posted: September 5, 2014

Netflix sought to increase availability beyond the capabilities of a single region. How is that possible, you ask? We walk you through the journey that Netflix underwent to redesign and operate our service to achieve this lofty goal. Using the principles of isolation and redundancy, our destination is a fully redundant active-active deployment architecture where end users can be served out of multiple AWS regions. If one region fails, another can quickly take its place. Along the way, we'll explore our Isthmus architecture, a stepping stone toward full active-active deployment where only the edge services are redundantly deployed to multiple regions. We'll cover real-world challenges we had to overcome, like near-real-time data replication, and the operational tools and best-practices we needed to develop to make it a success. Discover a whole new breed of monkeys we created to test multi-regional resiliency scenarios.

Rating: Everyone
Viewed 223 times
Recorded at: November 13, 2013
Date Posted: September 10, 2014

Learn about architecture best practices for combining AWS storage and database technologies. We outline AWS storage options (Amazon EBS, Amazon EC2 Instance Storage, Amazon S3 and Amazon Glacier) along with AWS database options including Amazon ElastiCache (in-memory data store), Amazon RDS (SQL database), Amazon DynamoDB (NoSQL database), Amazon CloudSearch (search), Amazon EMR (Hadoop) and Amazon Redshift (data warehouse). Then we discuss how to architect your database tier by using the right database and storage technologies to achieve the required functionality, performance, availability, and durability—at the right cost.

Rating: Everyone
Viewed 247 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

This session will describe how members of the US Large Hadron Collider (LHC) community have benchmarked the usage of Amazon Elastic Compute Cloud (Amazon EC2) resources to simulate events observed by experiments at the European Organization for Nuclear Research (CERN). Miron Livny from the University of Wisconsin-Madison, who has been collaborating with the US-LHC community for more than a decade, will detail the process for benchmarking high-throughput computing (HTC) applications running across multiple AWS regions using the open source HTCondor distributed computing software. The presentation will also outline the different ways that AWS and HTCondor can help meet the needs of compute-intensive applications from other scientific disciplines.

Rating: Everyone
Viewed 228 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

(Presented by SAP) SAP HANA, available on the AWS Cloud, is an industry transforming in-memory platform, which has been adopted by many startups and ISVs, as well as traditional SAP enterprise customers. SAP HANA converges database and application platform capabilities in-memory to transform transactions, analytics, text analysis, predictive, and spatial processing so businesses can operate in real-time. Please join us to learn what SAP HANA can do for you!

Doug Turner, CEO of Mantis Technologies and an early adopter of SAP HANA One on AWS, will present and share his experience migrating his Sentiment Analysis solution from MySQL to SAP HANA One. He will talk about the following benefits that he achieved with this migration:

-Dramatic simplification of his system architecture and landscape
-System consolidation by moving from 23 MySQL instances to one SAP HANA One instance
-Reduced overall AWS infrastructure cost, as well as less admin effort and greater efficiency

We will conclude with an overview of the key SAP HANA capabilities on the AWS Cloud, such as text analysis, predictive analytics, geospatial processing, and data integration. We will round out the session with an in-depth view of the new HANA deployment options available on the AWS Cloud, such as the ability for customers to bring their own licenses (BYOL) of SAP HANA to run on AWS in a variety of configurations ranging from 244 GB up to 1.22 TB.

Rating: Everyone
Viewed 165 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

MACPAC is a federal legislative branch agency tasked with reviewing state and federal Medicaid and Children's Health Insurance Program (CHIP) access and payment policies and making recommendations to Congress. By March 15 and again by June 15 each year, the agency produces a comprehensive report for Congress that compiles results from Medicaid and CHIP data sources for the 50 states and territories. The CIO of MACPAC wanted a secure, cost-effective, high performance platform that met their needs to crunch this large amount of health data. In this session, learn how MACPAC and 8KMiles helped set up the agency's Big Data/HPC analytics platform on AWS using SAS analytics software.

Rating: Everyone
Viewed 241 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

With AWS, companies now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100 percent API driven enables businesses to use lean methodologies and realize these benefits. This in turn leads to greater success for those who make use of these practices. In this session, we talk about some key concepts and design patterns for continuous deployment and continuous integration, two elements of lean development of applications and infrastructures.

Rating: Everyone
Viewed 343 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Traditionally, content delivery networks (CDNs) were known to accelerate static content. Amazon CloudFront has come a long way and now supports delivery of entire websites that include dynamic and static content. In this session, we introduce you to CloudFront's dynamic delivery features that help improve the performance, scalability, and availability of your website while helping you lower your costs. We talk about architectural patterns such as SSL termination, close proximity connection termination, origin offload with keep-alive connections, and last-mile latency improvement. Also learn how to take advantage of Amazon Route 53's health check, automatic failover, and latency-based routing to build highly available web apps on AWS.
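
To make the Route 53 piece concrete, here is a minimal boto3 sketch of a latency-based record tied to a health check, so users reach the lowest-latency healthy endpoint; the zone ID, record name, health check ID, and ELB hostname are all hypothetical.

```python
import boto3

r53 = boto3.client("route53")

# One record set per region; Route 53 answers with the lowest-latency
# region whose health check is passing.
r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hypothetical zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "us-east-1",   # one per regional record
                "Region": "us-east-1",          # enables latency-based routing
                "HealthCheckId": "hc-east-id",  # hypothetical health check
                "ResourceRecords": [{"Value": "east-elb.example.amazonaws.com"}],
            },
        }],
    },
)
```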

Rating: Everyone
Viewed 168 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Configure once, deploy anywhere is one of the most sought-after enterprise operations requirements. Large-scale IT shops want to keep the flexibility of using on-premises and cloud environments simultaneously while maintaining their monolithic, custom, complex deployment workflows and operations. This session brings together several hybrid enterprise requirements and compares orchestration and deployment models in depth, without a vendor pitch or bias. It outlines several key factors to consider from the point of view of a real large-scale IT shop executive. Since each IT shop is unique, this session compares the strengths, weaknesses, opportunities, and risks of each model and then helps participants create new hybrid orchestration and deployment options for hybrid enterprise environments.

Rating: Everyone
Viewed 187 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Modern IT is embracing hybrid cloud as part of their overall IT strategy. AWS Direct Connect provides a critical tool for ingesting web scale data or leveraging custom appliances and legacy applications. This talk discusses the unique benefits of using Direct Connect to reduce cost, increase bandwidth, and provide a more consistent network experience between on-premises resources and the cloud. It details the components, requirements, and configuration options.

Rating: Everyone
Viewed 286 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual data center that you define. In this session you learn how to leverage the VPC networking constructs to configure a highly available and secure virtual data center on AWS for your application. We cover best practices around choosing an IP range for your VPC, creating subnets, configuring routing, securing your VPC, establishing VPN connectivity, and much more. The session culminates in creating a highly available web application stack inside of VPC and testing its availability with Chaos Monkey.
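
A minimal boto3 sketch of the basic VPC building blocks the session covers; the CIDR ranges and Availability Zones are hypothetical choices.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Carve a VPC into public and private subnets in separate AZs.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24",
                           AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                            AvailabilityZone="us-east-1b")["Subnet"]["SubnetId"]

# An internet gateway plus a default route make the public subnet reachable.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public)
```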

Rating: Everyone
Viewed 185 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Magazine Luiza, one of the largest retail chains in Brazil, developed an in-house product recommendation system built on top of a large knowledge graph. AWS resources like Amazon EC2, Amazon SQS, Amazon ElastiCache and others made it possible for them to scale from a very small dataset to a huge Cassandra cluster. By improving the big data processing algorithms in their in-house solution built on AWS, they improved their revenue conversion rates by more than 25 percent compared to the market solutions they had used in the past.

Rating: Everyone
Viewed 226 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Automating application deployments is old hat. Imagine a world where you can build everything from your data center up to your application via code and automate it. It's a reality, and it's called AWS CloudFormation. At Adobe, we use AWS CloudFormation to define our infrastructure in AWS as code. By using AWS CloudFormation in combination with other tools, including OpsCode Chef, we are able to create highly flexible and customized workflows that ensure consistent and audited deployments. From our VPCs to our applications, we can build and tear down in a matter of minutes. Come see how we have put the power of AWS CloudFormation to work at Adobe using advanced techniques such as substacks, identity and access management roles, bootstrapping into Chef, and more—including a demo of our automation environment. For us, AWS CloudFormation is the service that ties a pretty bow around all of the other powerful AWS offerings.

Rating: Everyone
Viewed 171 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Big Data is more than petabytes and capacity. It is the opportunity to use data to your advantage to make smart decisions that increase productivity and grow your business. In this session, you'll learn about the latest advancements in data analytics, databases, storage, and high performance computing (HPC) at AWS and discover how to put data to work in your own organization.

Rating: Everyone
Viewed 206 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Understanding the factors that drive consumer purchase behavior makes brands better marketers. In this session, join the Vice President of Mechanical Turk to explore how retail businesses are marrying human judgment with large scale data analytics without sacrificing efficiency or scalability. We'll highlight real world examples and introduce Jon Brelig, CTO of InfoScout, to explore how his company is leveraging a combination of automated methods and Mechanical Turk to build out a real-world analytics solution relied upon by brands such as P&G, Unilever, and General Mills. By extracting item-level purchase data from more than 40,000 consumer receipt images each day and associating it with specific products, brands, user surveys, and other digital marketing signals, InfoScout is able to rapidly gauge changes in consumer behavior and market share with remarkable granularity.

Rating: Everyone
Viewed 229 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

A few years ago, Netflix had a fairly classic business intelligence tech stack. Now, things have changed. Netflix is a heavy user of AWS for much of its ongoing operations, and Data Science & Engineering (DSE) is no exception. In this talk, we dive into the Netflix DSE architecture: what and why. Key topics include their use of Big Data technologies (Cassandra, Hadoop, Pig + Python, and Hive); their Amazon S3 central data hub; their multiple persistent Amazon EMR clusters; how they benefit from AWS elasticity; their data science-as-a-service approach, how they made a hybrid AWS/data center setup work well, their open-source Hadoop-related software, and more.

Rating: Everyone
Viewed 319 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

Researchers at Clemson University assigned a student summer intern to explore bioinformatics cloud solutions that leverage MPI, the OrangeFS parallel file system, AWS CloudFormation templates, and a Cluster Scheduler. The result was an AWS cluster that runs bioinformatics code optimized using MPI-IO. We give an overview of the process and show how easy it is to create clusters in AWS.

Rating: Everyone
Viewed 268 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

This presentation will introduce Kinesis, the new AWS service for real-time streaming big data ingestion and processing.
We'll provide an overview of the key scenarios and business use cases suitable for real-time processing, and discuss how AWS designed Amazon Kinesis to help customers shift from a traditional batch-oriented processing of data to a continual real-time processing model. We'll provide an overview of the key concepts, attributes, APIs, and features of the service, and discuss building a Kinesis-enabled application for real-time processing. We'll also contrast Kinesis with other approaches for streaming data ingestion and processing. Finally, we'll discuss how Kinesis fits as part of a larger big data infrastructure on AWS, including S3, DynamoDB, EMR, and Redshift.
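
A minimal producer/consumer sketch with boto3, to make the ingestion model concrete; the stream name and record payload are hypothetical, and a real consumer would keep reading via the NextShardIterator in each response.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer: push a record onto a stream (stream name hypothetical).
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user": "u-1", "page": "/home", "ts": time.time()}),
    PartitionKey="u-1",  # records with the same key land on the same shard
)

# Consumer: read records from one shard, oldest first.
shard_it = kinesis.get_shard_iterator(
    StreamName="clickstream",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]
records = kinesis.get_records(ShardIterator=shard_it, Limit=100)["Records"]
```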

Rating: Everyone
Viewed 270 times
Recorded at: November 13, 2013
Date Posted: September 9, 2014

AWS offers many data services, each optimized for a specific set of structure, size, latency, and concurrency requirements. Making the best use of all specialized services has historically required custom, error-prone data transformation and transport. Now, users can use the AWS Data Pipeline service to orchestrate data flows between Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon Redshift, and on-premises data stores, seamlessly and efficiently applying EC2 instances and EMR clusters to process and transform data. In this session, we demonstrate how you can use AWS Data Pipeline to coordinate your Big Data workflows, applying the optimal data storage technology to each part of your data integration architecture. Swipely's Head of Engineering shows how Swipely uses AWS Data Pipeline to build batch analytics, backfilling all their data, while using resources efficiently. Consequently, Swipely launches novel product features with less development time and less operational complexity.
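
A minimal boto3 sketch of driving AWS Data Pipeline programmatically: create, define, activate. The pipeline name and definition are hypothetical, and a real definition would include activities such as EmrActivity or CopyActivity alongside the default object shown here.

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline, then attach a (deliberately tiny) definition.
pipeline_id = dp.create_pipeline(
    name="nightly-etl", uniqueId="nightly-etl-1")["pipelineId"]

dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {"id": "Default", "name": "Default",
         "fields": [{"key": "scheduleType", "stringValue": "ondemand"}]},
        # ... EmrActivity / CopyActivity objects would be added here ...
    ],
)
dp.activate_pipeline(pipelineId=pipeline_id)
```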

Rating: Everyone
Viewed 281 times
Recorded at: November 13, 2013
Date Posted: September 10, 2014

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. In this Zero to Sixty session, learn about CloudFormation's latest features along with best practices for using them, including maintaining complex environments with CloudFormation, template management and re-use, and controlling stack updates. Demos and code samples are available to all session attendees.
Are you new to AWS CloudFormation? Get up to speed for this session by first completing the 60-minute Fundamentals of CloudFormation lab in the Self Paced Lab Lounge.
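
A minimal boto3 sketch of stack creation, with a deliberately tiny hypothetical template (one S3 bucket) standing in for a real environment:

```python
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# A toy template: real stacks model whole environments the same
# declarative way, just with more resources and parameters.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogBucket": {"Type": "AWS::S3::Bucket"},
    },
}

cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
```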

Rating: Everyone
Viewed 242 times
Recorded at: November 13, 2013
Date Posted: September 10, 2014

This session walks through the mechanics of AWS bill computation and consolidated billing to help you understand your bill. AWS billing has many features to help you manage and control your costs in the AWS cloud environment including detailed billing reports, programmatic access, cost allocation, billing alerts, and IAM access. We provide an overview of these features and then demonstrate how to use and incorporate them into your own account setup.
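
Billing alerts, one of the features listed above, amount to a CloudWatch alarm on the EstimatedCharges metric. A minimal boto3 sketch; the threshold and SNS topic ARN are hypothetical, and billing metrics must first be enabled for the account (they are published in us-east-1).

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-1000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # billing metrics update a few times per day
    EvaluationPeriods=1,
    Threshold=1000.0,        # hypothetical budget line
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
)
```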

Rating: Everyone
Viewed 263 times
Recorded at: November 14, 2013
Date Posted: September 5, 2014

Today's applications work across many different data assets - documents stored in Amazon S3, metadata stored in NoSQL data stores, catalogs and orders stored in relational database systems, raw files in filesystems, etc. Building a great search experience across all these disparate datasets and contexts can be daunting. Amazon CloudSearch provides simple, low-cost search, enabling your users to find the information they are looking for. In this session, we will show you how to integrate search with your application, including key areas such as data preparation, domain creation and configuration, data upload, integration of search UI, search performance and relevance tuning. We will cover search applications that are deployed for both desktop and mobile devices.
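
A minimal boto3 sketch of the upload-then-search flow against a CloudSearch domain endpoint; the endpoint, document, and query are hypothetical (find your real endpoint with the cloudsearch client's describe_domains call).

```python
import json
import boto3

# The cloudsearchdomain client talks to one specific search domain.
domain = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-movies-abc123.us-east-1.cloudsearch.amazonaws.com",
)

# Upload a small document batch...
batch = [{"type": "add", "id": "tt0118715",
          "fields": {"title": "The Big Lebowski", "year": 1998}}]
domain.upload_documents(documents=json.dumps(batch),
                        contentType="application/json")

# ...then query it with the simple query parser.
hits = domain.search(query="lebowski", queryParser="simple", size=10)
```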

Rating: Everyone
Viewed 276 times
Recorded at: November 14, 2013
Date Posted: September 5, 2014

Amazon Simple Queue Service (Amazon SQS) makes it easy and inexpensive to enhance the scalability and reliability of your cloud application. In this session, we demonstrate design patterns for using Amazon SQS in conjunction with Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Relational Database Service, and Amazon Redshift. Shazam will share their experience of combining Amazon SQS with Amazon DynamoDB to support a Super Bowl advertising campaign.
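
The core SQS pattern behind these designs is enqueue, long-poll, process, delete. A minimal boto3 sketch with a hypothetical queue and handler:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="work-items")["QueueUrl"]

# Producer side: enqueue a unit of work.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"job": 42}')

# Worker side: long-poll, process, then delete. A received message stays
# invisible while being worked on and reappears if the worker dies.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    process(msg["Body"])  # hypothetical handler
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```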

Rating: Everyone
Viewed 275 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns of our Memcached and Redis offerings and how customers have used them for in-memory operations and achieved improved latency and throughput for applications. During this session, we review best practices, design patterns, and anti-patterns related to Amazon ElastiCache. We also include a demo where we enable Amazon ElastiCache for a web application and show the resulting performance improvements.

Rating: Everyone
Viewed 439 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Want to learn how to build your own Google Analytics? Learn how to build a scalable architecture using node.js, Amazon DynamoDB, and Amazon EMR. This architecture is used by ScribbleLive to track billions of engagement minutes per month. In this session, we go over the code in node.js, how to store the data in Amazon DynamoDB, and how to roll up the data using Hadoop and Hive.

Rating: Everyone
Viewed 187 times
Recorded at: November 14, 2013
Date Posted: September 5, 2014

As troves of data grow exponentially, the number of analytical jobs that process the data also grows rapidly. When you have large teams running hundreds of analytical jobs, coordinating and scheduling those jobs becomes crucial. Using Amazon Simple Workflow Service (Amazon SWF) and AWS Data Pipeline, you can create automated, repeatable, schedulable processes that reduce or even eliminate the custom scripting and help you efficiently run your Amazon Elastic MapReduce (Amazon EMR) or Amazon Redshift clusters. In this session, we show how you can automate your big data workflows. Learn best practices from customers like Change.org, KickStarter and UnSilo on how they use AWS to gain business insights from their data in a repeatable and reliable fashion.
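
One building block such workflows schedule is a transient EMR cluster that runs its steps and terminates itself. A minimal boto3 sketch; the cluster sizing, roles, and Hive script location are hypothetical.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a cluster that runs one Hive step and shuts down when done,
# the pattern a scheduler (SWF, Data Pipeline, cron) would kick off nightly.
emr.run_job_flow(
    Name="nightly-aggregation",
    ReleaseLabel="emr-5.36.0",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the steps finish
    },
    Steps=[{
        "Name": "hive-rollup",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["hive-script", "--run-hive-script",
                     "--args", "-f", "s3://my-bucket/rollup.q"],  # hypothetical script
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```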

Rating: Everyone
Viewed 197 times
Recorded at: November 14, 2013
Date Posted: September 10, 2014

You already know that AWS CloudFormation is a powerful tool for provisioning and managing your AWS infrastructure, but did you know that it can also provision and manage resources outside of AWS? Did you know that CloudFormation can fully bootstrap your EC2 instances, securely download data from S3, and even supports Mustache templates? In this session you will go on a deep dive, touring some of CloudFormation's most advanced features with a member of the team that built the service. Explore custom resources, cfn-init, S3 authentication, and Mustache templates in a series of technical demos with code samples available for download afterwards.

Rating: Everyone
Viewed 200 times
Recorded at: November 14, 2013
Date Posted: September 10, 2014

With AWS you can choose the right database technology and software for the job. Given the myriad of choices, from relational databases to non-relational stores, this session provides details and examples of some of the choices available to you. This session also provides details about real-world deployments from customers using Amazon RDS, Amazon ElastiCache, Amazon DynamoDB, and Amazon Redshift.

Rating: Everyone
Viewed 183 times
Recorded at: November 14, 2013
Date Posted: September 5, 2014

(Presented by Skytap) Complex multi-tier enterprise applications that have been under development for decades assume reliable hardware and typically have dependencies on underlying operating systems, hardware configurations, and network topologies. The boundary between one application or service and another is often fuzzy, with many interdependencies. These traits make some enterprise applications difficult to refactor and move to a public cloud. Even the teams that manage these applications can be unfamiliar with cloud terminology and concepts. In this session for enterprise IT architects and developers, Brad Schick, CTO of Skytap and Skytap customers Fulcrum, DataXu and F5 will share their insights into why the evolution of enterprise applications will lead to hybrid applications that opportunistically take advantage of cloud-based services. Brad will then demonstrate Skytap Cloud with Amazon Web Services and discuss how enterprises can easily achieve this integration today for application development and testing.

Rating: Everyone
Viewed 174 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

1,000,000,000,000,000 bytes. On demand. Online. Live. Big doesn't quite describe this data. Amazon Web Services makes it possible to construct highly elastic computing systems, and you can further increase cost efficiency by leveraging the Spot Pricing model for Amazon EC2. We showcase elasticity by demonstrating the creation and teardown of a petabyte-scale multiregion MongoDB NoSQL database cluster, using Amazon EC2 Spot Instances, for as little as $200 in total AWS costs. Oh, and it offers up four million IOPS to storage via the power of PIOPS EBS. Christopher Biow, Principal Technologist at 10gen | MongoDB, covers MongoDB best practices on AWS, so you can implement this NoSQL system (perhaps at a more pedestrian hundred-terabyte scale?) confidently in the cloud. You could build a massive enterprise warehouse, process a million human genomes, or collect a staggering number of cat GIFs. The possibilities are huMONGOus.
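
Spot capacity like the cluster described above is requested by naming a bid price. A minimal boto3 sketch with a hypothetical AMI, price, and instance type:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bid for interruptible capacity; instances are reclaimed (and would be
# re-requested by your tooling) if the market price exceeds the bid.
ec2.request_spot_instances(
    SpotPrice="0.25",        # hypothetical maximum price per instance-hour
    InstanceCount=10,
    LaunchSpecification={
        "ImageId": "ami-12345678",                     # hypothetical AMI
        "InstanceType": "i3.2xlarge",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # hypothetical group
    },
)
```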

Rating: Everyone
Viewed 196 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

SmugMug.com is a popular hosting and commerce platform for photo enthusiasts with hundreds of thousands of subscribers and millions of viewers. Learn how SmugMug uses Amazon DynamoDB to provide customers detailed information about millions of daily image and video views. SmugMug shares code and information about their stats stack, which includes an HTTP interface to Amazon DynamoDB and also interfaces with their internal PHP stack and other tools such as Memcached. Get a detailed picture of lessons learned and the methods SmugMug uses to create a system that is easy to use, reliable, and high performing.

Rating: Everyone
Viewed 203 times
Recorded at: November 14, 2013
Date Posted: September 5, 2014

One of the most critical roles of an IT department is to protect and serve its corporate data. As a result, IT departments spend tremendous amounts of resources developing, designing, testing, and optimizing data recovery and replication options in order to improve data availability and service response time. This session outlines replication challenges, key design patterns, and methods commonly used in today's IT environment. Furthermore, the session provides different data replication solutions available in the AWS cloud. Finally, the session outlines several key factors to be considered when implementing data replication architectures in the AWS cloud.

Rating: Everyone
Viewed 168 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Woot, an Amazon subsidiary, specializes in offering great new product deals every day. Woot's deeply discounted deals and signature events like the 'Woot-Off' and 'Bag of Crap' sales launch at specific times throughout the day, and the resulting spiky traffic patterns are highly correlated with revenue.
In this session, we offer an unvarnished perspective into how Woot uses services such as Amazon DynamoDB, EC2, ELB, CloudSearch, CloudFront, and SES. Learn how to architect for security and PCI for a retail website running on AWS. Dig into the technical details of a data-store comparison between DynamoDB, Mongo, Oracle, and SQL Server to find the right solution for unique workloads. Join us as we share our musings and real lessons learned from using a cocktail of AWS services. We encourage you to attend even if none of this makes sense or is interesting. Don't miss the opportunity to hang out with Mortimer the Woot monkey and his crew and to walk away with one of our legendary flying monkeys.

Rating: Everyone
Viewed 336 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

As more customers adopt Amazon Virtual Private Cloud architectures, the features and flexibility of the service are squaring off against increasingly complex design requirements. This session follows the evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, managing multi-tenant VPCs, conducting VPC-to-VPC traffic, extending corporate federation and name services into VPC, running multiple hybrid environments over AWS Direct Connect, and integrating corporate multiprotocol label switching (MPLS) clouds into multi-region VPCs.

Rating: Everyone
Viewed 386 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Amazon Simple Queue Service (SQS) and Amazon DynamoDB together form a fast, reliable, and scalable layer for receiving and processing high volumes of messages, based on their distributed and highly available architecture. We propose a full system that can handle any volume of data or level of throughput, without losing messages or requiring other services to be always available. It also enables applications to process messages asynchronously and to add compute resources based on the number of messages enqueued.
The whole architecture helps applications reach predefined SLAs, as we can add more workers to improve overall performance. In addition, it decreases total costs, because new workers run briefly and only when they are required.
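
A minimal sketch of the scaling idea, sizing a worker fleet from queue depth, using boto3; the queue URL, Auto Scaling group name, and worker-to-backlog ratio are hypothetical.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Read the current backlog (queue URL hypothetical).
attrs = sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/work-items",
    AttributeNames=["ApproximateNumberOfMessages"],
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# One worker per ~100 queued messages, bounded between 1 and 20.
desired = min(20, max(1, backlog // 100))
autoscaling.set_desired_capacity(
    AutoScalingGroupName="worker-fleet",  # hypothetical group
    DesiredCapacity=desired,
)
```

Run on a schedule, this loop grows the fleet during bursts and shrinks it when the queue drains, which is what keeps the cost profile low.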

Rating: Everyone
Viewed 197 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

This session tells the story of how security-minded enterprises provide end-to-end protection of their sensitive data in AWS. Learn about the enterprise security architecture decisions made by Fortune 500 organizations during actual sensitive workload deployments as told by the AWS professional service security, risk, and compliance team members who lived them. In this technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture & service composition, security configuration decisions, and the creation of AWS security operations playbooks to support the architecture.

Rating: Everyone
Viewed 139 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

In this talk, hear about two high-performance research services developed and operated by the Computation Institute at the University of Chicago, running on AWS. Globus.org, a high-performance, reliable, robust file transfer service, has over 10,000 registered users who have moved over 25 petabytes of data using the service. The Globus service is operated entirely on AWS, leveraging Amazon EC2, Amazon EBS, Amazon S3, Amazon SES, Amazon SNS, etc. Globus Genomics is an end-to-end next-gen sequencing analysis service with state-of-the-art research data management capabilities. Globus Genomics uses Amazon EC2 for scaling out analysis, Amazon EBS for persistent storage, and Amazon S3 for archival storage. Attend this session to learn how to move data quickly at any scale as well as how to use genomic analysis tools and pipelines for next generation sequencers using Globus on AWS.

Rating: Everyone
Viewed 172 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

(Presented by Basho) This session will discuss the transformation of the most widely distributed cable TV network in the United States, building on one of the world's most visited digital properties, to create a world-class Big Data platform.
Architects, CTOs, CIOs, IT directors, and development managers will learn how to run highly scalable analytics workloads on Amazon EC2 and Amazon EMR for complex, real-time analysis of large data sets, all while decreasing time to results and increasing business agility. Bryson Koehler, EVP & CIO of The Weather Company, will discuss the architecture, technology choices, performance results, and business benefits realized as part of their use of AWS services to host an exciting set of weather.com solutions and generate new revenue streams.
Weather impacts over 30% of global GDP daily and is the source of vast amounts of data collection. The Weather Company is the leader in weather forecasting and is bringing the world's most accurate forecasting capabilities alive in a full suite of data APIs built fully on Infrastructure as a Service platforms, including AWS, and next-generation products like Basho Riak, Hadoop, and Dasein.
This session will discuss how the application of these technologies helps keep people safe and helps businesses plan and become more profitable, thanks to the latest intersection of consumer behavior and weather forecasting and reporting.

Rating: Everyone
Viewed 187 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Learn how you can use Amazon ElastiCache to easily deploy a Memcached or Redis compatible, in-memory caching system to speed up your application performance. We show you how to use Amazon ElastiCache to improve your application latency and reduce the load on your database servers. We'll also show you how to build a caching layer that is easy to manage and scale as your application grows. During this session, we go over various scenarios and use cases that can benefit by enabling caching, and discuss the features provided by Amazon ElastiCache.
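
A minimal boto3 sketch of standing up a small ElastiCache cluster and finding its endpoint; the cluster ID and node type are hypothetical (Redis shown; Memcached is the other supported engine).

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Provision a single-node Redis cluster.
elasticache.create_cache_cluster(
    CacheClusterId="app-cache",       # hypothetical id
    Engine="redis",
    CacheNodeType="cache.t3.micro",   # hypothetical node type
    NumCacheNodes=1,
)

# Once available, look up the node endpoint to point clients at.
desc = elasticache.describe_cache_clusters(
    CacheClusterId="app-cache", ShowCacheNodeInfo=True)
node = desc["CacheClusters"][0]["CacheNodes"][0]["Endpoint"]
print(node["Address"], node["Port"])
```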

Rating: Everyone
Viewed 235 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Learn how Amazon's enterprise data warehouse, one of the world's largest data warehouses managing petabytes of data, is leveraging Amazon Redshift. Learn about Amazon's enterprise data warehouse best practices and solutions, and how they're using Amazon Redshift technology to handle design and scale challenges.

Rating: Everyone
Viewed 193 times
Recorded at: November 14, 2013
Date Posted: September 9, 2014

Learn how to monitor your database performance closely and troubleshoot database issues quickly using a variety of features provided by Amazon RDS and MySQL including database events, logs, and engine-specific features. You also learn about security best practices to use with Amazon RDS for MySQL. In addition, you learn how to effectively move data between Amazon RDS and on-premises instances. Lastly, you learn the latest about MySQL 5.6 and how you can take advantage of its newest features with Amazon RDS.
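
A minimal boto3 sketch of two monitoring hooks the session mentions, instance events and log files; the instance identifier and log name are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Recent events (failovers, backups, config changes) for one instance,
# over the last 24 hours (Duration is in minutes).
events = rds.describe_events(
    SourceIdentifier="mydb", SourceType="db-instance", Duration=1440)

# Tail the MySQL error log.
portion = rds.download_db_log_file_portion(
    DBInstanceIdentifier="mydb",
    LogFileName="error/mysql-error.log",
    NumberOfLines=100,
)
print(portion["LogFileData"])
```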

Rating: Everyone
Viewed 199 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

Learn how to take advantage of Amazon RDS to run highly-available and performance-intensive production applications on AWS. We show you what you can do to achieve the highest levels of availability and performance for your relational databases. You learn how easy it is to architect for these requirements using several Amazon RDS features, such as Multi-AZ deployments, read replicas, and Provisioned IOPS storage. In addition, you learn how to quickly architect for the level of disaster recovery required by your business. Finally, some of our customers share how they built very high performing web and enterprise applications on Amazon RDS.
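
A minimal boto3 sketch of the availability and performance levers named above: Multi-AZ, Provisioned IOPS, and a read replica. Identifiers and credentials are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Multi-AZ gives a synchronous standby for failover; Provisioned IOPS
# gives consistent storage performance.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="change-me",  # hypothetical credentials
    AllocatedStorage=100,
    StorageType="io1",
    Iops=1000,
    MultiAZ=True,
)

# Read replicas offload read traffic from the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
)
```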

Rating: Everyone
Viewed 268 times
Recorded at: November 15, 2013
Date Posted: September 10, 2014

AWS OpsWorks lets you model your application with layers that define the building blocks of your application: load balancers, application servers, databases, etc. But did you know that you can also extend OpsWorks layers or build your own custom layers? Whether you need to perform a specific task or install a new software package, OpsWorks gives you the tools to install and configure your instances consistently, and evolve them in an automated and predictable fashion through your application's lifecycle. We'll dive into the development process including how to use attributes, recipes, and lifecycle events; show how to develop your environment locally; and provide troubleshooting steps that reduce your development time.
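
A minimal boto3 sketch of a custom layer wired to your own Chef recipes at lifecycle events; the stack ID and recipe names are hypothetical.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# A custom layer runs your recipes at each lifecycle event
# (Setup, Configure, Deploy, Undeploy, Shutdown).
opsworks.create_layer(
    StackId="11111111-2222-3333-4444-555555555555",  # hypothetical stack id
    Type="custom",
    Name="queue-workers",
    Shortname="workers",
    CustomRecipes={
        "Setup": ["myapp::install_deps"],   # hypothetical cookbook recipes
        "Deploy": ["myapp::deploy"],
        "Shutdown": ["myapp::drain"],
    },
)
```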

Rating: Everyone
Viewed 165 times
Recorded at: November 15, 2013
Date Posted: September 10, 2014

(Presented by Capgemini) In this session Capgemini discusses how their enterprise customers leverage AWS using the COMPLETE platform to deploy and manage applications such as SAP, Business Information Management Elastic Analytics, and mobility solutions. This session also shows detailed AWS architectures they are delivering to clients and how Capgemini is using AWS infrastructure internally.

Rating: Everyone
Viewed 235 times
Recorded at: November 15, 2013
Date Posted: September 10, 2014

Over the past year, mobile in-app feedback provider Apptentive has scaled MongoDB on AWS from a single machine to a sharded, thousands-of-operations-per-second, several-hundred-gigabyte cluster. This session—packed with demos, code, and actual performance numbers—shares the lessons learned along the way. Topics include picking the right tools for the job (instance sizing and selection, I/O choices, and topological choices); using Chef/AWS OpsWorks and AWS CloudFormation to deploy and scale; monitoring with Amazon CloudWatch and MMS; managing backups with Amazon EBS snapshots; and using Amazon Elastic MapReduce alongside MongoDB instances.
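
A minimal boto3 sketch of the EBS-snapshot backup step; the volume IDs are hypothetical, and consistent MongoDB backups additionally require quiescing writes (e.g., fsyncLock) or snapshotting a hidden replica-set member.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot each data volume of the cluster (volume ids hypothetical).
for volume_id in ["vol-0123456789abcdef0", "vol-0fedcba9876543210"]:
    ec2.create_snapshot(
        VolumeId=volume_id,
        Description="nightly mongodb backup",
    )
```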

Rating: Everyone
Viewed 209 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

SmugMug spent six years split between its datacenters and AWS. Find out how and why SmugMug went 100% AWS, migrating 30 TB of databases, hundreds of frontends, load balancing, and caches across the US in one night with zero downtime. We show you specific techniques and processes that made our large-scale migration a resounding success: moving massive MySQL databases, testing and sizing a new AWS infrastructure, automating AWS operations, managing the risks involved in wholesale infrastructure change, and architecting for reliability in multiple AWS Availability Zones. We talk about the performance, scalability, operational, and business benefits and challenges we've seen since moving 100% to AWS. Finally, we share secrets about our favorite AWS products.

Rating: Everyone
Viewed 273 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

GraphLab is like Hadoop for graphs in that it enables users to easily express and execute machine learning algorithms on massive graphs. In this session, we illustrate how GraphLab leverages Amazon EC2 and advances in graph representation, asynchronous communication, and scheduling to achieve orders-of-magnitude performance gains over systems like Hadoop on real-world data.

Rating: Everyone
Viewed 229 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

Yelp is evolving from a purely hosted infrastructure environment to running many systems in AWS—paving the way for their growth to 108 million monthly visitors (source: Google Analytics). Embracing a cloud culture reduced reliability issues, sped up the pace of innovation, and helped them support dozens of data-intensive Yelp features, including search relevance, usage graphs, review highlights, spam filtering, and advertising optimizations. Today, Yelp runs 7+ TB hosted databases, 250+ GB compressed logs per day in Amazon S3, and hundreds of Amazon Elastic MapReduce jobs per day. In this session, Yelp engineers share the secrets of their success and show how they achieved big wins with Amazon EMR and open source libraries, policies around development, privacy, and testing.

Rating: Everyone
Viewed 165 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

Running high-performance scientific and engineering applications is challenging no matter where you do it. Join IT executives from Hitachi Global Storage Technology, The Aerospace Corporation, Novartis, and Cycle Computing and learn how they have used the AWS cloud to deploy mission-critical HPC workloads.
Cycle Computing leads the session on how organizations of any scale can run HPC workloads on AWS. Hitachi Global Storage Technology discusses experiences using the cloud to create next-generation hard drives. The Aerospace Corporation provides perspectives on running MPI and other simulations, and offers insights into considerations like security while running rocket science on the cloud. Novartis Institutes for Biomedical Research talks about a scientific computing environment for performance benchmark workloads and large HPC clusters, including a 30,000-core environment for research in the fight against cancer, using the Cancer Genome Atlas (TCGA).

Rating: Everyone
Viewed 165 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

If you've ever developed code for processing data, you know what a mess it can be—especially on Hadoop. You lack debugging tools, instant feedback, automated tests, and a sane deploy. Mortar has developed a modern framework for data processing on Hadoop and Amazon Elastic MapReduce. It is a free, open framework providing instant, step-by-step execution visibility, automated testing, reusable components, and one-button deployment. See how Mortar demonstrates this framework on Amazon EMR on a sample data set to solve a big data problem.

Rating: Everyone
Viewed 355 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

This presentation will introduce Kinesis, the new AWS service for real-time streaming big data ingestion and processing.
We'll provide an overview of the key scenarios and business use cases suitable for real-time processing, and how Kinesis can help customers shift from a traditional batch-oriented processing of data to a continual real-time processing model. We'll explore the key concepts, attributes, APIs, and features of the service, and discuss building a Kinesis-enabled application for real-time processing. We'll walk through a candidate use case in detail, starting with creating an appropriate Kinesis stream for the use case, configuring data producers to push data into Kinesis, and creating the application that reads from Kinesis and performs real-time processing. This talk also includes key lessons learned, architectural tips, and design considerations in working with Kinesis and building real-time processing applications.

Rating: Everyone
Viewed 228 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

By turning the data center into an API, AWS has enabled Sumo Logic to build a very large scale IT operational analytics platform as a service at unprecedented scale and velocity. Based around Amazon EC2 and Amazon S3, the Sumo Logic system is ingesting many terabytes of unstructured log data a day while at the same time delivering real-time dashboards and supporting hundreds of thousands of queries against the collected data. When co-founder and CTO Christian Beedgen started Sumo Logic, it was obvious that the service would have to scale quickly and elastically, and AWS has been providing the perfect infrastructure for this endeavor from the start.
In this talk, Christian dives into the core Sumo Logic architecture and explains which AWS services are making Sumo Logic possible. Based around an in-house developed automation and continuous deployment system, Sumo Logic is leveraging Amazon S3 in particular for large-scale data management and Amazon DynamoDB for cluster configuration management. By relying on automation, Sumo Logic is also able to perform sophisticated staging of new code for rapid deployment. Using the log-based instrumentation of the Sumo Logic codebase, Christian will dive into the performance characteristics achieved by the system today and share war stories about lessons learned along the way.

Rating: Everyone
Viewed 325 times
Recorded at: November 15, 2013
Date Posted: September 9, 2014

Amazon RDS makes it cheap and easy to deploy, manage, and scale relational databases using a familiar MySQL, Oracle, or Microsoft SQL Server database engine. Amazon RDS can be an excellent choice for running many large, off-the-shelf enterprise applications from companies like JD Edwards, Oracle, PeopleSoft, and Siebel. In this session, you learn how to best leverage Amazon RDS for use with enterprise applications and learn about best practices and data migration strategies.

Rating: Everyone
Viewed 212 times
Recorded at: November 15, 2013
Date Posted: September 10, 2014

Migrating data from the existing environments to AWS is a key part of the overall migration to Amazon RDS for most customers. Moving data into Amazon RDS from existing production systems in a reliable, synchronized manner with minimum downtime requires careful planning and the use of appropriate tools and technologies. Because each migration scenario is different, in terms of source and target systems, tools, and data sizes, you need to customize your data migration strategy to achieve the best outcome. In this session, we do a deep dive into various methods, tools, and technologies that you can put to use for a successful and timely data migration to Amazon RDS.

Rating: Everyone
Viewed 198 times
Recorded at: November 15, 2013
Date Posted: September 10, 2014

(Presented by Datadog) Gaining visibility into an application stack's performance is necessary to understand how the stack is running and to configure alerts effectively. Instrumenting each component in the stack to produce metrics provides this insight. In an environment that scales automatically, hosts are being automatically added, removed, and reassigned. Using an automated methodology for instrumentation in these environments can improve results and save you time. This session includes a live demo component to show auto-instrumentation of hosts, graphing, and alerting on metrics.

Rating: Everyone
Viewed 170 times
Recorded at: November 15, 2013
Date Posted: September 10, 2014

The University of Minnesota recently closed a successful bid for IaaS and selected AWS as a campus-wide partner. In this talk, University staff will discuss the way the bid process was handled, what they encountered in the responses, and what lessons they learned. They will also highlight the benefits of a reseller partner for the process and the potential benefits of managed services from a reseller for an institution like the University of Minnesota. The discussion will also include some current and upcoming use cases for the University.

Rating: Everyone
Viewed 182 times
Recorded at: November 16, 2013
Date Posted: September 10, 2014

In this talk, the engineering team behind the Intuit PaaS takes you through the design of our shared PaaS and its integration with AWS OpsWorks. We give an overview of why we decided to build our own PaaS, why we chose OpsWorks as the engine, technical details of the implementation as well as all the challenges in building a shared runtime environment for different applications. Anyone interested in OpsWorks or building a PaaS should attend for key lessons from our journey.

Rating: Everyone
Viewed 320 times
Recorded at: November 16, 2013
Date Posted: September 10, 2014

(Presented by Red Hat) Learn how you can quickly develop, host, and scale applications in the AWS cloud with Red Hat's OpenShift. In this session, we walk you through the simple process of deploying and managing your own Linux-based application in the cloud using live demonstrations. We also discuss key use-cases and benefits to automated configuration, deployment, and administration of application stacks.