Open your issue trackers, get your pull requests ready, and join John-David Dalton, co-maintainer of jsperf.com and creator of Lo-Dash, to perf the web forward as he discusses commonly overlooked performance issues, rethinks established code patterns, and shares tips you can apply to your own projects and favorite libraries.
Oh no! You have a bug in your app, but you have no idea where it is. I’ll walk you through how we found and squashed a gnarly bug in socket.io using Wireshark, Chrome’s developer tools, lots of logging, and pretty graphs. I’ll also share some tips and tricks for tracking down and squashing bugs of your own.
Functions from libraries such as scipy.optimize, scipy.spatial, statsmodels, and numdifftools comprise the core of the pySI.calibrate routines, which are constructed automatically depending upon the specified model inputs. As a result, the user can focus on identifying different flow systems and understanding the associated spatial processes, rather than on the algorithmic differences that emerge between models. After calibration is completed, the estimated parameters and their diagnostic statistics are reported in a uniform fashion. Using functions within pySI.simulate, the parameter estimates can then serve as inputs for predicting new flows. More recently developed models that do not require input parameters are also available, allowing comparisons among results from differing conceptual formulations. Finally, results may be visualized as plots and networks via matplotlib, igraph, and networkx. Overall, the pySI framework will increase the accessibility of spatial interaction modelling while also serving as a tool that can help new users understand the associated methodological intricacies.
In this presentation, the concept of spatial interaction and a few key modelling terms will first be introduced, along with several example applications. Next, two traditional techniques for calibrating spatial interaction models, Poisson generalized linear regression and direct maximum likelihood estimation, will be contrasted. It will then be demonstrated how this new framework allows users to execute either form of calibration from identical input variables, specified as a pandas DataFrame, without any significant mathematical or statistical training. Results from two different conceptual models will be compared to illustrate how pySI can be used to explore different methods and models of spatial interaction.
Ben Golub and Solomon Hykes give a thank-you speech celebrating one year of Docker at Docker HQ.
We say things like “don’t block the event loop”, “make sure your code runs at 60 frames-per-second”, “well of course, it won’t work, that function is an asynchronous callback!”
IPython provides tools for interactive exploration of code and data. IPython.parallel is the part of IPython that enables an interactive model for parallel execution, and it aims to make distributing your work across a multicore computer, local clusters, or cloud services such as AWS or MS Azure simple and straightforward. The tutorial will cover how to do interactive and asynchronous parallel computing with IPython, and how to get the most out of your IPython cluster. Some of IPython’s novel interactive features will be demonstrated, such as automatically parallelizing code with magics in the IPython Notebook and interactive debugging of remote execution. Examples covered will include parallel image processing, machine learning, and physical simulations, with exercises to solve along the way.
Introduction to IPython.parallel
Using DirectViews and LoadBalancedViews
The basic model for execution
Getting to know your IPython cluster:
Working with remote namespaces
AsyncResult: the API for asynchronous execution
Interacting with incomplete results. Remember, it’s about interactivity
Interactive parallel plotting
More advanced topics:
Using IPython.parallel with traditional (MPI) parallel programs
Debugging parallel code
Minimizing data movement
Caveats and tuning tips for IPython.parallel
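As a taste of the DirectView and AsyncResult workflow outlined above, here is a minimal sketch. It assumes a cluster already started with `ipcluster start -n 4` and uses the pre-IPython-4 import path named in the tutorial (later renamed ipyparallel); `slow_square` is a made-up toy work unit, and the sketch falls back to a serial map when no cluster is reachable.

```python
def slow_square(x):
    """Toy work unit; a real session might do image processing instead."""
    return x * x

try:
    from IPython.parallel import Client  # IPython < 4; later `ipyparallel`
    rc = Client()                        # connect to the running ipcluster
    dview = rc[:]                        # a DirectView over all engines
    ar = dview.map_async(slow_square, range(8))  # returns an AsyncResult
    print(ar.get())                      # block until the engines finish
except Exception:
    # No cluster available: run the same map serially so the sketch still works.
    print(list(map(slow_square, range(8))))
```

Swapping `rc[:]` for `rc.load_balanced_view()` gives the LoadBalancedView variant, where tasks are assigned to whichever engine is free rather than partitioned up front.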
Ceph is a mature platform for software-defined storage environments scaling to dozens or hundreds of petabytes. However, real-life implementation, operations, and maintenance are complex tasks. Ensuring compatibility of software and hardware, avoiding bottlenecks between storage nodes, keeping the complete stack running, exchanging end-of-life hardware, and operating the complete stack efficiently can create substantial effort and risk for IT. Playing around with petabyte-scale storage is not an option, and reliable service levels are key. Fujitsu presents an easy way to move from “build your own disaster” to an enterprise-class service level for Ceph-based storage.