Admix announces $25,000 User Acquisition Boost program

Admix is offering AR and VR developers $25,000 for user acquisition.

Admix, the in-game monetization solution that recently closed a $7m Series A, is offering developers the chance to win a boost of up to $25,000 towards their user acquisition.

To participate, studios need to install the Admix SDK to power non-intrusive ads in their games; Admix will monitor progress over the first six weeks and handpick the apps with the most potential.

Developers can sign up to the program here and boost their acquisition.

Founded in 2018, Admix is a full-stack monetization platform for game, esports, virtual reality and augmented reality developers wishing to monetize in a different way. Instead of traditional adverts such as banners or video interstitials, Admix’s SDK for Unity or Unreal enables developers to drag and drop posters, billboards or even 3D spaces into their games.

This ‘virtual estate’ is then sold programmatically to thousands of advertisers, a big differentiator from many competing services that act as agencies. The Admix platform attracted more than 200 mobile publishers in less than a year, growth that attracted VCs to lead its Series A round.

Recently, the Admix team has bolstered its partnerships team with hires from AppLovin and InMobi. This team will evaluate the performance of new apps joining the platform, monitoring weekly userbase growth, session time, retention, and monetization vitals via the Admix solution.

It will then award the UA grants to the most promising apps and help them acquire users via a mixture of sources, including Admix’s own in-game inventory, something the company is currently testing.

“While monetization has been our bread and butter, we have always tried to help the developer ecosystem by sharing resources or featuring indie developers. This is the next step, with us contributing financially towards an app’s success. We couldn’t be more excited about accelerating the next hit games,” said CEO Sam Huber.

Although the program is open to all developers, including new apps, the terms and conditions of the offer indicate that Admix is mainly looking for apps with early traction and strong unit economics that it can accelerate through funding and UA expertise.

About Admix 

The company launched publicly in November 2018 by revealing the first-ever advertising campaign in VR with National Geographic and Oath (now Verizon).

They have since expanded to traditional games, and less than 18 months later, the company is working with over 200 game developers, including leading casual titles. Admix is doubling its inventory size every 2 months, which attracts over 500 advertisers monthly, including Fortune 500 brands like Amazon, Spotify, Uber and Universal.

About Samuel Huber

Samuel Huber is the founder of Admix, a monetization platform for games, VR and AR based in London. Previously, Sam was the owner of an indie game studio, building hyper-casual mobile games before they were cool. Today, Sam regularly speaks about game monetization, advertising and extended reality at conferences across the world.

Irish eyes are smiling as global tech giants support Ireland’s tech sector

A group of global tech companies has thrown its weight behind an initiative to promote awareness of the booming technology sector in Ireland.

Companies involved include Facebook, Google, Microsoft, Oracle, Symantec, Twitter and Intel. Despite the economic downturn, Ireland’s technology sector has continued to grow, with more than 74,000 employed directly and more than 200,000 indirectly.

App developers cash in as wages go up 27% in 2012

The average annual salary for senior app developers has shot up 27% in just the last 12 months, from £55K to £70K.

According to a ReThink report, annual salaries for app developers, typically programmers in in-house web teams or consultancies, easily outstrip salaries for web developers without app development skills. Salaries for traditional web developer roles average £60K.

Want a job in London’s Tech City? How about unlimited holidays?

Crisis, what crisis? As the non-digital element of the UK population struggles with unemployment, shrinking incomes and a divide more discreet than the north/south house price divide, others ain’t doing too badly.

In London and the (some would say fool’s) paradise of Shoreditch where the streets are paved with (some would say fool’s) gold, those young people who listened at school are being offered all types of perks to join start-ups.

According to a poll conducted by Silicon Milkroundabout, some start-ups are tempting the best developers away from the City with benefits that include not only health and dental packages, equity and free travel, but also unlimited holidays.

At this weekend’s jobs fair, arranged by the aforesaid Silicon Milkroundabout (please change that name), there were more than 800 jobs on offer from 130 companies, with developer salaries averaging £34,000 and a top-end wage of £85,000.

Perks on offer included free health insurance, equity stakes, gym membership and ‘the computer of their choice’, whisky club membership, unlimited holiday, travel abroad, music festival tickets, wake-boarding trips and even remote control helicopters.

Silicon Milkroundabout, the brainchild of Songkick co-founders Pete Smith and Ian Hogarth, claims that its job fair has not only helped people find hundreds of jobs, it has also saved start-ups more than £5 million in recruiters’ fees.

“Tech City will only continue to grow, as will the demand for talent. Start-ups still need to be inventive in what they can offer employees that the City and big tech companies can’t, and the ultimate benefit is the chance to play an integral part in a growing company,” said Pete Smith, Songkick co-founder.

This must seem like another world to a struggling family in Halifax, where broadband brings to mind a pair of trousers rather than a career choice. But for now, a note to budding developers with all this choice in front of them.

Don’t take the remote control helicopter. I know it looks clever and you can show it to your friend(s), but go for the unlimited holidays. Trust me, you won’t regret it.

The future is Apple TV, but the TV comes second to the on-sofa device

This guest post is by Jeremy Allaire, CEO and Chairman of Brightcove, who blogs here.

Earlier this year we announced and demonstrated a new capability within our App Cloud platform for creating Dual Screen Apps for Apple TV. This got a lot of attention and opened up people’s minds to the way in which our lives are being transformed with tablets and TVs.

A lot of the focus was on how people might provide apps that support watching video content, which is a natural use case. But the opportunity for dual screen apps is larger: to transform how content and applications are experienced, whether in the home living room, the office meeting room, the classroom, the retail store, the hospital, or any other context where people interact around content and information that would benefit from rendering and display on a large screen such as a TV monitor.

To better understand this concept, it’s necessary to step back and reconsider the nature of how we write software and the user experience model for software.

Today, the predominant user experience model for software and applications online is a single screen. We browse to Web applications on a desktop PC, mobile browser or tablet browser and interact with and consume content and applications on that screen.

It is very much a single, individual user task. Likewise, we install apps onto these devices and consume and interact with information, perform tasks, make purchases, etc. through these apps. Again, this is a solitary single individual task.

As a result, when software creators plan their applications, they are typically designed and developed with this single user, single screen concept in mind.

Dual screen apps change all of that by shifting the software and user experience model from one user to potentially many, and from one screen (PC/phone/tablet) to two screens (phone/tablet AND TV monitor). From a software development and UX perspective, the large monitor (which is the true 2nd screen, vs. the standard concept that puts the tablet as the 2nd screen) becomes an open computing surface where one can render any form of application functionality, information, data and content.

Importantly, designers and developers need to shed the concept that “TVs” are for rendering video, and instead think about “TVs” as large monitors on which they can render applications, content and interactivity that is supported by a touch-based tablet application.

While we have the greatest affinity for large monitors as fixtures of the living room, flat-screen monitors are increasingly becoming a ubiquitous part of our social fabric. In fact, large monitors often sit at the center of any social setting. In the home, these large monitors provide a social surface for those sharing the living room space. Increasingly, monitors are a common part of nearly every business meeting room, not for watching video, but for projecting shared content, business data and presentations that support business and organization collaboration.

Likewise, monitors are in medical and hospital settings providing visual information to patients. They are increasingly in nearly every classroom, whether through a projector or an actual TV monitor and support the presentation of information that is needed for a collection of students. Large monitors are increasingly ubiquitous in retail settings as well.

The key concept here is that this pervasive adoption of TV monitors is the tip of the spear in creating a social computing surface in the real world. Forget about social networks that connect people across their individual, atomized computing devices, the real social world is groups of people in a shared space (living room, office, classroom, store, etc.) interacting around information and data on a shared screen.

Until very recently, the way in which these TV monitors could be leveraged was limited to connecting a PC through an external display connector to a projector or directly to a TV. The recent breakthrough that Apple has fostered and advanced more than any other computing or CE company is AirPlay and associated dual screen features in iOS and Apple TV.

Specifically, Apple has provided the backbone for dual screen apps, enabling:

* Any iOS device (and OS X Mountain Lion-enabled Mac) to broadcast its screen onto a TV. Think of this as essentially a wireless HDMI output to a TV. If you haven’t played with the AirPlay mirroring features in iOS and Apple TV, give it a spin; it’s a really exciting development.

* A set of APIs and an event model for enabling applications to become “dual screen aware” (e.g. to know when a device has a TV screen it can connect to, and to handle rendering information, data and content onto both the touch screen and the TV screen).
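The event model described above can be sketched in plain JavaScript. This is an illustrative model only; the class, method and event names here are hypothetical, not Apple’s or Brightcove’s actual APIs. The idea is simply that an app reacts to screen connect/disconnect events and routes its rendering accordingly.

```javascript
// Hypothetical sketch of a "dual screen aware" app. All names here are
// illustrative assumptions; a real app would be driven by the platform's
// screen-connect notifications rather than direct method calls.
class DualScreenManager {
  constructor() {
    this.externalScreen = null; // the TV, when one is connected
    this.listeners = { connect: [], disconnect: [] };
  }

  on(event, handler) {
    this.listeners[event].push(handler);
  }

  // Simulates the platform telling us a TV screen became available.
  attachScreen(screen) {
    this.externalScreen = screen;
    this.listeners.connect.forEach((h) => h(screen));
  }

  // Simulates the TV screen going away.
  detachScreen() {
    const screen = this.externalScreen;
    this.externalScreen = null;
    this.listeners.disconnect.forEach((h) => h(screen));
  }

  // Render to the TV if present, otherwise fall back to the device screen.
  render(content) {
    return this.externalScreen
      ? `TV(${this.externalScreen.name}): ${content}`
      : `device: ${content}`;
  }
}

const app = new DualScreenManager();
app.on("connect", (s) => console.log(`connected to ${s.name}`));

console.log(app.render("photo gallery")); // falls back to the device screen
app.attachScreen({ name: "Apple TV" });
console.log(app.render("photo gallery")); // now targets the TV
```

The key design point is that the TV is just another render target the app discovers at runtime, rather than something the app assumes is always present.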

With Apple TV unit sales already outselling the Xbox in the most recent quarter, we can see a world that goes from approximately 5 million dual-screen-capable Apple TVs to potentially 15-20 million in the next two years, and eventually to 30-50 million units as new and improved versions of the Apple TV companion device come to market.

As a result, it’s an incredible time to experiment with this fundamental shift in computing, software and user experience, to embrace a world where the Tablet is the most important personal productivity device, and the TV is a rich and powerful surface for rendering content and applications.

As we rethink the TV as a computing surface for apps, it’s really helpful to have some ideas on what we’re talking about. Below are a series of hypothetical examples of what is possible today and of course what will be even bigger as these new dual screen run-times proliferate.

Imagine that you’re looking into buying a house. You open your tablet app from a reputable home listing service, perform a search using criteria you care about, and begin adding potential fits to a list of houses you’d like to explore. When you select a specific house, the app detects you’re connected to an Apple TV and launches a second screen on the TV that provides rich, large visual displays about the house: HD quality photos and contextual information. Here, the power of dual screen is that you and your spouse can sit in the living room and explore a house together, presented with HD quality media, without crouching over a computer or tablet on someone’s lap.

Imagine launching the BMW app on your tablet and deciding to both learn about car models and configure a car. Like buying a house, this is often a ‘social’ decision between partners. On the TV, the app renders a high quality rendition of the car.

As you explore the car’s features from your tablet, associated media (photos, video and contextual meta-data) render onto the large TV in front of you. As you configure your car using your tablet, it updates a visual build of the car on the large screen. Not sure what a specific feature provides? The app plays an inline HD video on the big screen.

Looking to introduce your three-year old to key cognitive development concepts? Launch a learning app where the child interacts with the tablet application and sees visual information, animation and other content on the TV screen. Their touches on the tablet instantly produce rich and relevant content on the TV screen. Learning to count? Feed cookies over AirPlay to the cookie monster on the TV, who eats and counts with you. Learning about concepts like near and far? Tap the tablet to make a character move closer to and away from you. Build a character on the tablet and watch the character emerge on the TV screen.

As a sales manager, you walk into your team conference room with a TV monitor mounted on the wall. You open your app on your tablet and begin filtering and bringing up specific reports, and with the touch of a button you push unique visual reports onto the shared surface of the conference room TV. Here, the sales manager wants control of the searches and filters they have access to, and only wants to render the charts and reports the whole team needs to see.

Imagine playing Monopoly with your family in the living room – one or two or maybe even three touch devices present (phones, iPod Touches, iPads) – each player has their inventory of properties and money visible on their device. The app passes control to each user as they play. On the TV screen is the Monopoly ‘board’ with a dynamic visual that updates as users play – the movement of players, the building up of properties and so on.

A teacher walks into a classroom with an Apple TV connected to an HDMI-capable projector that projects onto a wall or screen. From their tablet, they pull up an application designed to help teach chemistry and the periodic table; they control which element to display, and the TV shows rich information, video explanations and so on.

The app is designed to provide ‘public quiz’ functionality: the TV display shows a question, presumably related to material just reviewed or from homework; students raise their hands to answer; and then the answer and explanation are displayed.

You are meeting with your doctor to go over test results from an MRI scan. The doctor uses his or her tablet to bring up your results, picks visuals to throw onto the TV monitor in the room, then uses his or her finger to highlight key areas and talk to you about what they’re seeing.

You’re at a Best Buy, interested in buying a new high quality digital camera. A sales specialist approaches you, tablet in hand, and asks a few questions about what you’re interested in while tapping your answers into their tablet app. From there, the app brings up a set of camera options on a nearby TV display; based on further probing, they drill into a specific camera choice, which brings up a rich visual with a video overview of the camera you’re interested in.

A major revolution has just broken out in a nation across the planet. Time Magazine has captured incredible audio, photos and video of the events. You and your friends sit down in front of the TV to learn more.

You open the Time Magazine tablet app and bring up a special digital edition about the revolution. From the tablet, you flip through and render onto the TV rich HD quality photographs, listen to first hand audio accounts (accompanied by photos) and watch footage from the events. The app renders a huge visual timeline of the events that led up to the revolution. It’s an immersive media experience that can be easily shared by friends and family in the living room.

Last but not least, of course, dual screen apps will be essential to any app that is about consuming video, whether a news or magazine app, a vertical website, or a catch-up TV app from a TV network or show that you care about.

You open the app on your tablet to explore what to watch; when you’re ready, the show instantly pops onto your TV in gorgeous HD quality, and the tablet app becomes your remote control, presenting relevant contextual information about the video, episode or what have you.

Virtually every application that exists on the web, phones and tablets likely has a dual screen use case. Simply put, web and app designers and developers need to imagine a world where the tablet and TV are a single run-time for their applications, with each screen providing distinct value for the user controlling the app and the users consuming rich media and information on a large display.

Sometimes this is just one person (like picking and watching a show or playing a game or learning something), but crucially and very often I believe that these apps will be designed with multiple users — and a social context — in mind.
What are we doing about this?

A few months ago, we launched App Cloud Core (learn more here), a free online service and open source SDK that empowers Web and app developers to build cross-platform native apps (aka hybrid native apps) for iOS, Android and Apple TV.

App Cloud Core includes a suite of developer tools for building, testing, debugging and compiling hybrid apps in the cloud using HTML5, CSS3 and JavaScript. In the App Cloud Core SDK we include rich libraries for using native device APIs (and soon writing native code plug-ins), for constructing rich touch-centric user interfaces, and data services libraries for things like content caching, offline usage, file downloads and other key capabilities.

We’re trying to help unleash the talent and creativity of the millions of existing Web designers and developers into the market of native app development for key consumer devices. You can think of App Cloud Core as a really strong, feature rich and free alternative to Adobe’s PhoneGap and Appcelerator’s Titanium.

As part of this, we’ve released a set of additional APIs that enable developers to use HTML5 to build dual screen apps that work on iOS and Apple TV. We’ve abstracted the screen detection and communications plumbing, and updated our app development model to incorporate what we call multi-view applications (see simple API doc here), which enable a developer to use an HTML5 view on the TV and provide a simple means for the screens to communicate data and events to each other. (If you’re a developer, you can also find simple reference example source code for both Dual Screen Video and Dual Screen Web Views in our GitHub repo.)

Specifically, as a developer, let’s say you have an App that has several views and sections that can be used on a phone or tablet. Using the new dual screen multi-view APIs, your app can detect when a user has a connection to their Apple TV and automatically render a unique view or interface onto the TV screen, enabling all of the kinds of examples illustrated in the scenarios above.
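The screen-to-screen communication this enables can be sketched as a simple message channel between a controller view (the tablet) and a display view (the TV). This is a conceptual illustration only; the `ViewChannel` class and its method names are hypothetical and do not represent the actual App Cloud multi-view API.

```javascript
// Hypothetical sketch of two views exchanging data and events, in the
// spirit of the multi-view model described above. Names are assumptions,
// not the real App Cloud API.
class ViewChannel {
  constructor() {
    this.handlers = {}; // one handler per named view
  }

  // A view registers to receive messages addressed to it.
  subscribe(viewName, handler) {
    this.handlers[viewName] = handler;
  }

  // Deliver a message from one view to another.
  send(toView, message) {
    const handler = this.handlers[toView];
    if (handler) handler(message);
  }
}

const channel = new ViewChannel();
const tvLog = [];

// TV view: renders whatever the controller selects.
channel.subscribe("tv", (msg) => tvLog.push(`showing ${msg.item}`));

// Tablet view: reacts to playback state reported back by the TV.
channel.subscribe("tablet", (msg) => console.log(`TV reports: ${msg.state}`));

// The tablet drives the TV; the TV reports back.
channel.send("tv", { item: "episode 4" });
channel.send("tablet", { state: "playing" });

console.log(tvLog); // [ 'showing episode 4' ]
```

The point is the division of labor: the tablet view holds the controls and private state, while the TV view is a stateless render surface that reacts to messages, which is what lets each screen present something distinct to its audience.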
Where is this going?

This is such a groundbreaking approach to apps and software that we expect lots of others to try to emulate what Apple is doing. Already, Microsoft is promoting the ability to use its Surface tablet in conjunction with apps built for the Xbox. Samsung has introduced features in its tablets and TVs to enable easy media sharing from your tablet or phone onto a Samsung Smart TV, and surely Google will follow suit with AirPlay-like features in the Android OS.

While for now we’ve made a bet on Apple’s innovation in this space, we’re deeply committed to the idea of easily and cost-effectively building cross-platform apps, and will monitor these developments closely and support them if it makes sense for developers and end-users.

Additionally, Apple is still early in deploying this technology, and it is somewhat hidden from end-user view; learning how to turn on AirPlay on a phone or tablet, for example, is buried in the iOS environment. There are also bugs in the implementation, sometimes resulting in flaky behavior. We expect major changes, as well as native code capabilities directly on the Apple TV, and we’ll surely be evolving our SDKs to stay on top of any changes that come down the pike.

However, it’s very clear to us that there is a great future in dual screen apps, and we are excited to encourage designers and developers to start exploring and digging in and experimenting with what we expect to become a fundamental new paradigm in consumer software and media.