
This story will show you how to use Netcat to send and receive TCP/UDP packets.
We’ll focus on a specific example: simulating a StatsD client/server.

What is Netcat? Netcat is a featured networking utility which reads and writes data across network connections, using the TCP/IP protocol. Designed to be a reliable “back-end” tool, Netcat can be used directly with other programs and scripts to send files from a client to a server and back. At the same time, it is a feature-rich network debugging and exploration tool: it lets you specify the network parameters it uses and can establish a connection to a remote host through a tunnel.
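
The post’s example boils down to sending a single UDP datagram in the StatsD text format. As a rough sketch of the same exchange, here it is in plain Java rather than with netcat itself; the port (8125, the usual StatsD default) and the metric name are chosen purely for illustration.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal StatsD-style UDP exchange, mirroring what a netcat listener
// and a piped metric string would do.
public class StatsdSketch {
    public static void main(String[] args) throws Exception {
        // "Server": wait for one UDP datagram on port 8125.
        Thread server = new Thread(() -> {
            try (DatagramSocket socket = new DatagramSocket(8125)) {
                byte[] buffer = new byte[512];
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                System.out.println("received: "
                        + new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();

        // "Client": send a counter metric in the StatsD text format <name>:<value>|c
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] payload = "page.views:1|c".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), 8125));
        }
        server.join();
    }
}
```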

Read more »

I still remember when I joined the Hotels.com™ (part of Expedia Group™) technology team and how excited I was to start such a new challenge. From the very early days, I noticed that working in a global company is quite different from working with everyone in the same building. That is obvious, I know, but it wasn’t easy to get used to the new way of working.

I missed getting in touch with new teammates, sometimes felt isolated from the others, and had the feeling of being a remote worker. Only later, when I started working from home two days per week, did I realize that working in a global team has a lot in common with remote work.

Read more »

Microservices are mainstream nowadays, and they bring several challenges for software engineers: operations and infrastructure, security, monitoring, caching, fault tolerance, and so on.
In particular, keeping the communication between microservices under control is key to building reliable services.

In the Java world there are several solutions for this purpose but, in this post, I’d like to analyze how Hystrix leverages the “command pattern” to accomplish this goal.
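
To give a flavour of the approach, here is a minimal sketch along the lines of Hystrix’s hello-world command; the wrapped service call is imaginary.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// A command wrapping a remote call: Hystrix runs it with a timeout,
// a thread pool and a circuit breaker, and falls back on failure.
public class GreetingCommand extends HystrixCommand<String> {

    private final String name;

    public GreetingCommand(String name) {
        super(HystrixCommandGroupKey.Factory.asKey("GreetingGroup"));
        this.name = name;
    }

    @Override
    protected String run() {
        // Imagine an HTTP call to another microservice here.
        return "Hello " + name + "!";
    }

    @Override
    protected String getFallback() {
        // Executed when run() fails, times out or the circuit is open.
        return "Hello guest!";
    }
}

// Usage: new GreetingCommand("Bob").execute();  // synchronous
//        new GreetingCommand("Bob").queue();    // asynchronous Future
```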

Read more »

Nowadays, most blogs are powered by WordPress. I am a WordPress user too, and I have to admit it is really great for blogs.
Like other CMSs, WordPress requires a database and PHP in order to render dynamic pages server-side.
Jekyll, instead, is a static site generator: with it I can generate all my blog pages on my computer and then publish the entire website to a static hosting server.

Read more »

From REST to GraphQL in 90 minutes

21 December 2017, Hotels.com Technology Rome Office

The HCOM technology team in Rome would like to share its experiences with GraphQL and discuss how this language can be favoured over the now-familiar REST paradigm.

During the meetup we will introduce the basic principles of GraphQL, talk about how to define a schema describing your data, and show how queries can then be executed against it. This will be followed by a hands-on session in which we will implement a GraphQL service using the JavaScript implementation.

Read more »

What is GraphQL? The draft RFC specification (October 2016) defines it as “a query language created by Facebook in 2012 for describing the capabilities and requirements of data models for client-server applications”. More simply, GraphQL is a language specification for APIs: it defines how the client should query the server and how the server should execute those queries.
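
For a concrete taste of both sides, here is a minimal sketch using the graphql-java library (an arbitrary choice for illustration; the post is not tied to any particular implementation): the schema declares what can be queried, and the runtime executes a query against it.

```java
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.StaticDataFetcher;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

import static graphql.schema.idl.RuntimeWiring.newRuntimeWiring;

public class HelloGraphQL {
    public static void main(String[] args) {
        // The schema: the server exposes a single "hello" field.
        String sdl = "type Query { hello: String }";
        TypeDefinitionRegistry typeRegistry = new SchemaParser().parse(sdl);

        // The wiring: how each field is resolved.
        RuntimeWiring wiring = newRuntimeWiring()
                .type("Query", builder -> builder.dataFetcher("hello", new StaticDataFetcher("world")))
                .build();

        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(typeRegistry, wiring);
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();

        // The query: the client asks only for the fields it needs.
        ExecutionResult result = graphQL.execute("{ hello }");
        System.out.println(result.getData().toString()); // {hello=world}
    }
}
```
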
Read more »

Aspect-Oriented Programming (AOP) powerfully complements Object-Oriented Programming (OOP) by providing another way of thinking about program structure.

Drawing a comparison between AOP and OOP, we can say that the key unit of modularity in OOP is the class, whereas in AOP the unit of modularity is the aspect. With aspects, you can group application behaviour that was once spread throughout your applications into reusable modules. You can then declare exactly where and how this behaviour is applied. This reduces code duplication and lets your classes focus on their main functionality.
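
As a small illustration, here is a sketch of a logging aspect written with the AspectJ annotations used by Spring AOP (one possible AOP flavour, assumed here); the package in the pointcut is a made-up placeholder.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

// A cross-cutting concern (logging) pulled out of the business classes
// and declared once as a reusable aspect.
@Aspect
@Component
public class LoggingAspect {

    // Runs before every public method of every class under com.example.service.
    @Before("execution(public * com.example.service..*.*(..))")
    public void logInvocation(JoinPoint joinPoint) {
        System.out.println("Calling " + joinPoint.getSignature().toShortString());
    }
}
```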

Read more »

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

This post describes how to run Docker machines with the help of Boot2Docker.

Read more »

This post is about how to plan, for the first time, a cluster for Apache Hadoop and HBase. Hadoop, together with its friends, enables us to process large amounts of data cheaply: by large I mean roughly 100 gigabytes and above.

Hadoop implements the MapReduce framework, which is a way to take a query (or Job) over a dataset, divide it into several smaller queries (or Tasks), and then run those queries in parallel over multiple nodes of a cluster. Nothing new so far, this looks like the divide-et-impera paradigm: the innovation lies in the fact that the cluster node in charge of executing a task already holds the data the query has to process. So we are not moving data around in order to process it; we are assigning each task to the cluster node that already has the data!
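
The classic word-count Job gives an idea of how the work is split into map and reduce tasks; the sketch below uses the standard Hadoop MapReduce API, with class names chosen just for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Each map task runs on a node that already stores its block of input data
// and emits (word, 1) pairs for its portion of the input.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String word : line.toString().split("\\s+")) {
            if (!word.isEmpty()) {
                context.write(new Text(word), ONE);
            }
        }
    }
}

// Each reduce task receives all the counts emitted for a given word and sums them.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));
    }
}
```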

Read more »