I have a stream: Going reactive with Spring Data

Mark Paluch

Mark Paluch is a Software Craftsman working as a Spring Data engineer at Pivotal and the project lead of the lettuce Redis driver. He is a member of the CDI 2.0 expert group and is passionate about open source software.

Data access and application scalability are closely related. Applications hold on to threads until their work is done, even though most of that time is spent waiting for I/O. Reactive infrastructure shifts responsibilities to where they can be handled best. It’s a move towards data streaming that does not require upfront fetching and therefore optimizes memory and computational resources. This talk covers what a stream is and how reactive data access raises scalability limits by applying the most natural way of data access with Spring Data and Project Reactor. If you are a developer looking to consume data in a functional, reactive style, this is your chance to learn how your application can benefit from streaming data access, and why not everything should be reactive.
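
As a rough illustration of the style of data access the talk describes, here is a minimal sketch using Spring Data’s reactive repository support together with Project Reactor. The Person entity, repository name and query method are illustrative assumptions, not material from the talk, and they presume a reactive Spring Data store module on the classpath.

```java
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical entity; a real mapping would carry store-specific annotations.
class Person {
    String id;
    String lastname;
}

// Hypothetical repository; the derived query returns a Flux, so results are
// streamed to the subscriber instead of being fetched upfront into a list.
interface PersonRepository extends ReactiveCrudRepository<Person, String> {
    Flux<Person> findByLastname(String lastname);
}

class PersonService {

    private final PersonRepository repository;

    PersonService(PersonRepository repository) {
        this.repository = repository;
    }

    Mono<Long> countSmiths() {
        // Nothing happens until someone subscribes; backpressure keeps
        // memory usage bounded while elements flow through the pipeline.
        return repository.findByLastname("Smith").count();
    }
}
```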

Turbo Charge CPU Utilization in Fork/Join Using the ManagedBlocker

Heinz Kabutz

Heinz Kabutz writes the popular “The Java Specialists’ Newsletter” read by tens of thousands of enthusiastic fans in over 138 countries. To sign up, visit http://www.javaspecialists.eu

Fork/Join is a framework for parallelizing calculations using recursive decomposition, also called divide and conquer. These algorithms occasionally end up duplicating work, especially at the beginning of the run. We can reduce wasted CPU cycles by implementing a reserved caching scheme. Before a task starts its calculation, it tries to reserve an entry in the shared map. If it is successful, it immediately begins the calculation. If not, it blocks until the other thread has finished its calculation. Unfortunately, this might result in a significant number of blocked threads, decreasing CPU utilization. In this talk we will demonstrate this issue and offer a solution in the form of the ManagedBlocker. Combined with Fork/Join, it can keep parallelism at the desired level.
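
A simplified sketch of the reservation idea combined with the JDK’s ForkJoinPool.ManagedBlocker API: the first task to reserve a key computes the value, later tasks wait on the same future inside managedBlock so the pool can compensate with extra workers. The cache layout, key type and calculation are assumptions for illustration; only the ManagedBlocker mechanics come from java.util.concurrent.

```java
import java.util.concurrent.*;

// Reserved cache: one thread calculates per key, the others block cooperatively.
class ReservedCache<K, V> {

    private final ConcurrentMap<K, CompletableFuture<V>> cache = new ConcurrentHashMap<>();

    V computeIfAbsent(K key, Callable<V> calculation) throws Exception {
        CompletableFuture<V> reservation = new CompletableFuture<>();
        CompletableFuture<V> existing = cache.putIfAbsent(key, reservation);

        if (existing == null) {
            // We won the reservation: do the work and publish the result.
            try {
                V value = calculation.call();
                reservation.complete(value);
                return value;
            } catch (Exception e) {
                reservation.completeExceptionally(e);
                throw e;
            }
        }

        // Someone else is calculating: block via a ManagedBlocker so the
        // ForkJoinPool can spawn a compensating thread and keep parallelism up.
        ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
            public boolean block() throws InterruptedException {
                try {
                    existing.get();
                } catch (ExecutionException ignored) {
                    // the failure is re-thrown by the caller below
                }
                return true;
            }

            public boolean isReleasable() {
                return existing.isDone();
            }
        });

        return existing.get();
    }
}
```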

Developing modern web applications using Meteor and MongoDB

Aleksey Savateyev

Aleksey Savateyev is a senior architect at MongoDB and works with partners and Fortune 100 companies on building their applications on top of the MongoDB database. His primary focus is building web and cloud services: he served as Director of Product Management for Yahoo’s private cloud and as a senior software architect for Microsoft Azure, built startups and development teams at a few companies, and holds a master’s degree in Computer Science.

This talk will walk you through the process of building a fully featured application with Meteor and MongoDB and show tips and tricks to scale and optimize this application once it becomes widely used.

MongoDB is one of the leading NoSQL databases in the world and a primary choice for web developers. Meteor is one of the most advanced full-stack development frameworks and has MongoDB at its core (both on the server and the client!). This workshop will walk you through the process of building a fully featured application with Meteor and MongoDB and show tips and tricks to scale and optimize this application once it becomes widely used.

10 things I learned after doing 2400 code reviews in 6 months

Patroklos Papapetrou

Patroklos Papapetrou is a chief software architect, addicted to software quality, and an agile team leader with almost 20 years of experience in software development. He believes and invests in people and team spirit, seeking quality excellence. He is one of the authors of the book SonarQube in Action and recently founded his own consulting and training company. He is an occasional speaker, giving talks about clean code, code quality, software gardening and other cool stuff he wants to share with fellow developers.

I work remotely as part of a distributed company, and in my current job I have the chance to review almost all pull requests submitted by several developers. During the last six months I learned a lot, both as a reviewer and as a developer, and this is what I want to share!

I will discuss things I learned to improve myself as a reviewer, and I will present automation and tool-integration ideas to facilitate the code review process. I will also explain the most common coding mistakes I have noticed, developer patterns, human habits, and how all this made me not only a better code reviewer but also a better developer. It’s going to be a fun talk with some real code review examples that I hope people will enjoy. After all, code reviews should be fun and not a necessary evil.

Extending DevOps to Big Data Applications with Kubernetes

Nicola Ferraro

Nicola Ferraro is a senior software engineer at Red Hat. He is an Apache Camel committer and a contributor to the Fabric8 project, a microservice development platform based on Docker and Kubernetes. He has a long background in Big Data systems, having built applications based on Spark, HBase, Kafka and Hadoop for years. He is also the author of several open source projects related to Big Data application development.

DevOps, continuous delivery and modern architectural trends can dramatically speed up the software development process. Big Data applications are no exception and need to keep the same pace. In this talk, Nicola Ferraro (Red Hat) will give an overview of the next generation of cloud-native Big Data systems based on Docker and Kubernetes. Switching from static Big Data platforms to infrastructure as code allows releasing robust applications in minutes by minimizing the gap between all the environments (development, testing, staging and production), a typical problem in Big Data application development. The presentation goes through the full release cycle of a Spark application alongside a fleet of microservices exchanging data in near real time. A fully featured continuous delivery pipeline, provided by the Fabric8 platform, will automate every step of the build and release process. Spark clusters will come to life from nowhere and disappear when they are no longer needed…
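
As a loose illustration of treating the cluster as code from a JVM application (not necessarily how the Fabric8 pipeline itself does it), a sketch using the fabric8 kubernetes-client library might look like the following. The namespace, deployment name and replica count are made-up examples, and a configured kubeconfig is assumed.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

// Lists the pods of a hypothetical Spark namespace and scales a worker deployment.
public class ClusterOps {

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Inspect what is currently running in the namespace.
            for (Pod pod : client.pods().inNamespace("spark-jobs").list().getItems()) {
                System.out.println(pod.getMetadata().getName());
            }

            // Grow the Spark worker fleet on demand; shrink it back when idle.
            client.apps().deployments()
                  .inNamespace("spark-jobs")
                  .withName("spark-worker")
                  .scale(5);
        }
    }
}
```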

Effective Design of RESTful APIs

Mohamed Taman

Mohamed Taman is co-founder and CTO of PaySky International, an enterprise architect, a Java Champion, an adopter of Java EE 7, 8 and JavaFX, a JCP member, a JCP Executive Committee member, an Expert Group member of JSR 354, 363 and 373, a MoroccoJUG member, EGJUG leader and Oracle Egypt Architects Club board member. He speaks Java, loves mobile, is an international speaker and book author, won Duke’s Choice Awards in 2014 and 2015 and the JCP Outstanding Adopt-a-JSR Participant award in 2013, and is an IoT geek.

He is a frequent speaker at Java/Oracle user groups and conferences worldwide, including JavaOne (four times), JDC Egypt, Tunis JUG Day (twice), JEEConf (twice) and 33rd Degree Poland 2014. His talks were rated among the most interesting and best talks at both Tunisia Esprit JUG Day 2014 and JEEConf 2014. He has also given talks at RigaDevDays 2015, JPoint Russia 2015, JEEConf 2013-2015, Tunis JUG Days 2015, DWX Germany 2015, Devoxx Morocco, Belgium and UK, JavaOne 2012-2016, JavaDays Ukraine (Kharkiv, Kiev), Voxxed Istanbul 2016 and JFokus 2016.

Developers creating websites need to know how to build RESTful APIs correctly. This session will help you plan and model your own APIs and understand the six REST design constraints that help guide your architecture, including an example that wraps everything up.

Developers creating websites need to know how to build RESTful APIs correctly. This session will help you plan and model your own APIs and understand the six REST design constraints that help guide your architecture. I will start with a simple overview, including advice on identifying the users or “participants” of your system and the activities they might perform with it. I’ll help you paper-test your model, validating the design before you build it. You’ll then explore the HTTP concepts and REST constraints needed to build your API. Topics include: the three approaches to adding an API, modeling tips, creating and grouping API methods, mapping activities to verbs and actions, validating your API, working with HTTP headers and response codes, caching, layered systems, and creating a uniform interface. All of these topics will be illustrated with examples to clarify the ideas and demonstrate the concepts. So what are you waiting for?! Click to enroll.
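
As a hedged sketch of the “mapping activities to verbs and actions” topic, here is what a small resource could look like using Spring MVC annotations. The orders resource, its fields and the in-memory storage are invented for illustration and are not taken from the session.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.net.URI;
import java.util.Collection;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical "orders" resource: browse -> GET (collection), inspect -> GET (item),
// place -> POST, cancel -> DELETE. Response codes follow common REST conventions.
@RestController
@RequestMapping("/orders")
class OrderController {

    private final Map<String, Order> orders = new ConcurrentHashMap<>();

    @GetMapping
    public Collection<Order> listOrders() {
        return orders.values();
    }

    @GetMapping("/{id}")
    public ResponseEntity<Order> getOrder(@PathVariable String id) {
        Order order = orders.get(id);
        return order == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(order);
    }

    @PostMapping
    public ResponseEntity<Order> placeOrder(@RequestBody Order order) {
        order.id = UUID.randomUUID().toString();
        orders.put(order.id, order);
        // 201 Created plus a Location header pointing at the new resource.
        return ResponseEntity.created(URI.create("/orders/" + order.id)).body(order);
    }

    @DeleteMapping("/{id}")
    public ResponseEntity<Void> cancelOrder(@PathVariable String id) {
        return orders.remove(id) == null
                ? ResponseEntity.notFound().build()
                : ResponseEntity.noContent().build();
    }
}

// Minimal payload class for the sketch.
class Order {
    public String id;
    public String item;
    public int quantity;
}
```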

Handling Billions Of Edges in a Graph Database

Michael Hackstein

Michael Hackstein is a Senior Graph Specialist at ArangoDB GmbH. He holds a master’s degree in Computer Science and is the creator of ArangoDB’s graph capabilities. During his academic career he focused on complex algorithms and especially graph databases. Michael is an internationally experienced speaker who loves salad, cake and clean code.

Modern graph databases are designed to handle the complexity of the data, but still not its sheer amount. When a graph reaches a certain size, many dedicated graph databases hit their limits in scalability. In this talk I’ll provide an overview of current approaches and their limits with respect to scalability.

The complexity and amount of data keep rising. Modern graph databases are designed to handle that complexity, but still not the amount of data. When a graph reaches a certain size, many dedicated graph databases hit their limits in vertical or, most commonly, horizontal scalability. In this talk I’ll provide a brief overview of current approaches and their limits with respect to scalability. Dealing with complex data in a complex system doesn’t make things easier… but it does make finding a solution more fun. Join me on my journey to handle billions of edges in a graph database.

IoT and Edge Integration with Open Source Frameworks

Kai Waehner

Kai Waehner works as a Technical Evangelist at TIBCO. Kai’s main areas of expertise lie within the fields of Big Data, Analytics, Machine Learning, Integration, SOA, Microservices, BPM, Cloud, Java EE and Enterprise Architecture Management. He is a regular speaker at international IT conferences such as JavaOne, ApacheCon or OOP, writes articles for professional journals, and shares his experiences with new technologies on his blog (www.kai-waehner.de/blog). Find more details and references (presentations, articles, blog posts) on his website: www.kai-waehner.de

This session shows and demos open source IoT frameworks such as Eclipse Kura, Node-RED or Flogo, built to develop very lightweight microservices that can be deployed on small devices or in serverless architectures and wire together all kinds of hardware devices, APIs and online services.

The Internet of Things (IoT) and edge integration are becoming more important than ever due to the massively growing number of connected devices year by year. This session shows open source frameworks built to develop very lightweight microservices that can be deployed on small devices or in serverless architectures with very few resources and wire together all kinds of hardware devices, APIs and online services. The focus of this session lies on open source projects such as Eclipse Kura, Node-RED or Flogo, which offer a framework plus a zero-code environment with a web IDE for building and deploying integration and data processing directly onto connected devices, using IoT standards such as MQTT, WebSockets or CoAP, but also other interfaces such as Twitter feeds or REST services. The end of the session discusses the relation to other components in an IoT architecture, including cloud IoT platforms and big data or streaming analytics solutions.
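
To make the MQTT standard mentioned above concrete, here is a minimal, hypothetical publisher written against the Eclipse Paho Java client. This is plain MQTT rather than Kura, Node-RED or Flogo code, and the broker address, client id, topic and payload are assumptions for illustration only.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persistence.MemoryPersistence;

// Publishes a single (made-up) sensor reading over MQTT and disconnects.
public class SensorPublisher {

    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                "edge-gateway-42", new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        MqttMessage message = new MqttMessage("{\"temperature\": 21.5}".getBytes());
        message.setQos(1); // at-least-once delivery

        client.publish("factory/hall1/temperature", message);
        client.disconnect();
    }
}
```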

Spark Workshop

Dan Serban

Dan Serban is a data engineer who occasionally teaches advanced data engineering workshops using Spark as the big data framework.

Interested in learning the practical applications of a modern, streaming data analytics pipeline? Meet Apache Spark, the big data framework that helps reduce data interaction complexity, increase processing speed and enhance data-intensive, near-real-time applications with deep intelligence.

This 2-hour, intensely hands-on workshop introduces Apache Spark, the open-source cluster computing framework with in-memory processing and streaming capabilities that makes analytics applications up to 100 times faster than Hadoop. The workshop is aimed at seasoned developers with an interest in understanding the streaming data pipelines that power today’s real-time analytics engines. Agenda: Interactive Data Analytics Overview; Creating Spark DataFrames from Publicly Available Datasets; Spark Streaming Overview; Time Series Analytics Overview; Graph Analytics with Spark GraphX. All the tools we use during the workshop will run inside one Docker container per attendee on a cloud server, which will make it possible for attendees to continue experimenting at home on their own laptops.
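
As a small taste of the “Creating Spark DataFrames” agenda item, a minimal Java sketch could look like the following. The dataset path and column names are invented, and the workshop itself may use different tooling or languages.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Loads a hypothetical CSV dataset into a DataFrame and runs a simple aggregation.
public class DataFrameExample {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("workshop-example")
                .master("local[*]")   // run locally for experimentation
                .getOrCreate();

        Dataset<Row> trips = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/taxi-trips.csv");

        // Count trips per pickup borough and print the result.
        trips.groupBy("pickup_borough")
             .count()
             .orderBy("count")
             .show();

        spark.stop();
    }
}
```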

Offline-first apps with WebComponents

AMahdy AbdElAziz

AMahdy AbdElAziz is an international technical speaker, Google Developer Expert (GDE), trainer and developer advocate. He is passionate about web and mobile app development, including PWAs, offline-first design, in-browser databases, and cross-platform tools. He is also interested in Android internals, such as building custom ROMs and customizing AOSP for embedded devices.

PWA, offline-first design, framework.JS… etc. A lot of hype words lately, aren’t they? Let’s explore them in detail and see how they are affecting both mobile and web development.

We will explore how to boost the usability of web and mobile-web apps by implementing offline-first functionality; it’s the only way to guarantee a 100% always-on user experience. Low signal or no connectivity should no longer be a blocker for the user, and we will discuss the available solutions for caching, in-browser databases, and data replication. We will also take a look at how Web Components such as Polymer and Vaadin Elements help solve those issues out of the box. There will be a live coding demo to show how simple it is to manipulate large data sets, completely offline.