
We live in wonderful times. In the past two decades alone, a blink of an eye in the context of human history, we've arguably seen some of the largest improvements in living standards globally, thanks to the mass proliferation of internet-enabled devices. A significant majority of the world's population now has ready access to near-infinite streams of information on demand. Conversely, this also means that more people than ever before are generating information about their experiences with other entities over non-traditional channels. As with any technological revolution, businesses, academia, governments and other institutions recognize that faster adoption of these advancements confers a strategic advantage that is typically permanent in nature. And few would argue that, with the pace of advancement being what it is, standing still is akin to moving in reverse relative to the broader market. As a consulting firm operating in the data analytics space, Sparkline has had the privilege of working with some of the largest businesses and entities in the SEA and ANZ markets, watching this phenomenon take shape in front of our very eyes.
From the rapid growth in demand for quality talent to deliver on advanced data projects, to the increase in the complexity and scope of these projects, to the acceleration of investment into every part of the technology stack, all measures indicate a strong vote of confidence in the value derived from investments in this space.
These effects are best described by the sketch above, which illustrates "elevating the discussion" in businesses that invest in extracting value from information. At the lowest layer, the discussions employees typically have revolve around the "mechanics" or "logistics" of the data itself rather than higher-level business problems. This is mainly due to the significant effort and skill required to abstract away this layer and work at a higher level of abstraction.
As businesses gain expertise in, or deploy more advanced technology for, making sense of the information at their disposal, we typically see conversations shift towards higher-level topics that are closer to the goals the business cares about, and away from the noisy detail of the lower layers, where the problems to be solved are mechanical in nature and not directly tied to the problems faced by the business.
This brings us to one of the biggest areas of debate within companies planning investments in the data space: how does one strike the balance between hiring for and building expertise in the core skill sets required for data science, while at the same time staffing teams with the business-specific skills to make strategic decisions on the output of the former? An airline, for instance, may have a finite amount of time its data scientists can spend on problems; if demand for that time exceeds supply, the likely outcome is that projects within some teams get prioritized out.
Spyk is a platform that combines proprietary Sparkline IP with state-of-the-art methods across the technology stack. It aims to deliver the scale and accessibility businesses desire, empowering their teams to do their best work with little or no involvement required from specialists.
In our initial testing against common data science scenarios, we see a significant speedup in time spent per project, reducing a typical project that takes months to plan, coordinate and execute down to a few minutes. All this while operating on terabyte-scale data, with efficient, compressed storage and cutting-edge statistical frameworks running the desired models.
At present, we support reading data from all major databases within the major cloud providers (AWS, GCP and Azure). The platform automatically applies efficient encoding to the data it reads, making future data access faster. We support standard data manipulation and exploration right in the tool, and offer a point-and-click interface to train most models you may recognize from popular frameworks such as scikit-learn or TensorFlow.
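For context, the kind of hand-written code path that a point-and-click training interface replaces might look like the sketch below. This is purely illustrative: it uses scikit-learn on synthetic data, and nothing in it is part of the Spyk API.

```python
# Illustrative sketch of a conventional model-training workflow,
# the sort of code a point-and-click interface abstracts away.
# Uses scikit-learn on synthetic data; no Spyk APIs are shown.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data that would normally be read from
# a cloud database (AWS, GCP or Azure).
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train and evaluate a single model on a holdout split.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Even this minimal version already involves choices (splits, estimators, parameters) that require some familiarity with the framework.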
We see significant benefits in this setup, with our platform abstracting away the lower-level problems of infrastructure, setup and the expertise needed to get things right (the not-so-glamorous part of machine learning). This also empowers Spyk users to run multiple models in parallel, or to iteratively train multiple versions of the same model to reach the best outcome.
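Training multiple versions of the same model and keeping the best one is a standard pattern; in scikit-learn it is commonly expressed as a parallel grid search. The sketch below is one way this might look when done by hand, on synthetic data, and is an assumption-laden illustration rather than a description of how Spyk implements it.

```python
# Illustrative sketch: fit one model per parameter combination,
# in parallel (n_jobs=-1), and keep the best-scoring version.
# scikit-learn on synthetic data; no Spyk APIs are shown.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # candidate model versions
    cv=5,       # 5-fold cross-validation per candidate
    n_jobs=-1,  # train candidates in parallel
)
search.fit(X, y)
print("best C:", search.best_params_["C"],
      "cv score:", round(search.best_score_, 2))
```

Orchestrating this at terabyte scale, across many projects, is where the infrastructure and expertise burden typically lands.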
We're currently in a limited beta rollout of Spyk with some of our larger clients, and we'd love for you to be part of our journey! To sign up for a free trial, please reach out to analytics@sparkline.com.