Adrian Kwok

Hello! I am a full-stack software developer based out of Vancouver and I specialize in creating feature-rich interactive web applications. I am currently the lead developer of Active Textbook, an application that increases student learning engagement and is used worldwide by major educational platforms.

So, who am I?

I am currently a software developer at Evident Point in beautiful Richmond, British Columbia. My focus is on creating purposeful interactive web applications; my current work on Active Textbook allows me to wear many hats – one day I'm knee-deep hammering out code, the next I'm writing detailed integration specs for a cool new feature our users want, and another I'm meeting with prospective clients to gather requirements. Most importantly, my work has a direct impact on helping students enrich their learning experiences, which makes it rewarding to come in to work every day.

In 2011, I received my Master of Science degree from the School of Computing Science at Simon Fraser University. Before that, I completed my undergraduate studies with a Bachelor of Computer Science degree from the University of Waterloo. My six years in academia gave me a solid foundation in theoretical computer science, which I am now rounding out with practical experience in the workforce.

During my time as a graduate student, my research interests focused on human factors in computing, ranging from user-centered design and novel interaction techniques to improving the user experience on mobile devices. I conducted post-graduate research at the SYNAR lab, working on the Green Phones project led by Dr. Arrvindh Shriraman and Dr. Brian Fraser. Our team focused on understanding precisely how energy is consumed on (then) modern Android-based smartphones and attempted to provide smart kernel-level optimizations for a phone's various hardware components in order to maximize battery life.

In my spare time, I enjoy attempting to overcome gravity, tinkering with computer hardware to make it inaudible, messing around with DSLRs, maximizing musical enjoyment through exotic headphones, and playing the drums poorly. I am also crazy about anything related to hockey and, aside from that one embarrassing day in June 2011, a proud supporter of the Vancouver Canucks.

Projects
Things I've worked on from my days in academia

As part of my work on the Green Phones project, led by Dr. Arrvindh Shriraman and Dr. Brian Fraser, I was actively involved in developing both user and kernel-level applications for Android smartphones. The Green Phones project aimed to understand and concretely define factors that could influence overall power consumption in modern Android-based smartphones, with the ultimate goal of being able to provide strong power consumption estimates based on low-level usage statistics – such as instruction fetch misses or TLB cache misses – for any given application available on the Android Market.

The result of this work was intended to help users be mindful of applications that are unnecessarily power-hungry. We then used the gathered statistics to create smarter kernel-level optimization schemes tailored to specific mobile phones, and to provide a set of guidelines software developers could use to write more energy-efficient software.

I initially began work on the Green Phones project to fulfil course requirements for a multicore architectures seminar. Through this work, I developed several fine-grained user-level component stressors in a custom Android application, created an accurate and easy-to-use power measurement and data-gathering tool using inexpensive off-the-shelf hardware components, and assembled the tools and testbeds required to begin the next (unfinished) phases of the project, which I continued as part of post-graduate research.
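
To give a flavour of what a fine-grained component stressor looks like, here is a minimal, hypothetical Java sketch (not the original app's code) of a CPU stressor: a worker thread simply spins on arithmetic for a fixed duration so that an external measurement rig can sample the power drawn by the CPU in isolation.

    // Illustrative sketch of a fine-grained CPU stressor (not the original app's code).
    // A worker thread spins on arithmetic for a fixed duration so that the power draw
    // attributable to the CPU can be sampled by an external measurement rig.
    public class CpuStressor implements Runnable {
        private final long durationMillis;

        public CpuStressor(long durationMillis) {
            this.durationMillis = durationMillis;
        }

        @Override
        public void run() {
            long end = System.currentTimeMillis() + durationMillis;
            double sink = 0; // accumulating a value prevents the loop from being optimized away
            while (System.currentTimeMillis() < end) {
                sink += Math.sqrt(sink + 1);
            }
            System.out.println("stressor finished, sink = " + sink);
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(new CpuStressor(10_000)); // stress the CPU for 10 seconds
            t.start();
            t.join();
        }
    }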

For my last course as a graduate student, I enrolled in an autonomous robotics seminar out of pure interest and a lifelong fascination with robotics. The course focused on cutting-edge research, covering behaviour-based robotics; navigation and localization schemes using nearness diagrams, vector fields, and dynamic windows; task coordination in multi-robot systems; and evolutionary robotics. The most notable takeaway from the course was the importance of “keeping things simple” – one does not necessarily need complex robotic controllers to achieve complex behaviour. One such example is stigmergy, in which individuals in a multi-agent system communicate indirectly through the environment to achieve a seemingly complex global solution, without any planning or explicit coordination.

For a semester-long project, I tackled the problem of energy-constrained foraging in tight environments. Using C++ and Stage 4.0.1, I built controllers for two types of robots — a large tanker robot that acted as a mobile storage container and refueling station, and several smaller helper robots that conducted the actual foraging. The controllers derive their robustness from their simplicity: although the initial versions were complex, the set of governing rules was iteratively refined until only a few simple rules remained, while still satisfying the environment's constraints. The end result is a system that is more effective, efficient, and robust than traditional single-robot foraging architectures.
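
The original controllers were written in C++ against Stage; the Java sketch below only illustrates the shape of such a small, priority-ordered rule set, with hypothetical states, sensor inputs, and thresholds.

    // Hypothetical sketch of a small, priority-ordered rule set for a helper robot.
    // The actual controllers were written in C++ against Stage; the states, sensor
    // inputs, and thresholds here are illustrative only.
    enum Action { AVOID_OBSTACLE, RETURN_TO_TANKER, GRAB_PUCK, WANDER }

    class HelperController {
        Action decide(boolean obstacleAhead, double fuelLevel, boolean puckVisible) {
            if (obstacleAhead)    return Action.AVOID_OBSTACLE;   // safety always wins
            if (fuelLevel < 0.25) return Action.RETURN_TO_TANKER; // refuel/unload at the tanker
            if (puckVisible)      return Action.GRAB_PUCK;        // forage when a puck is in view
            return Action.WANDER;                                 // otherwise explore
        }

        public static void main(String[] args) {
            HelperController c = new HelperController();
            System.out.println(c.decide(false, 0.1, true)); // prints RETURN_TO_TANKER: fuel outranks foraging
        }
    }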

The Traveling Salesman Problem (TSP) is one of the classic NP-complete problems in computing science; finding fast, exact solutions is inherently difficult, which makes the TSP a prime candidate for approximate, parallelized solutions. For one of my assignments in a graduate multicore architectures course, I was required to produce two distinct parallel solutions to the TSP — each parallelized using traditional POSIX threads as well as the simpler OpenMP API — and to compare the performance of the four resulting implementations. As there are already several well-defined — albeit slow and, for the most part, boring — exact algorithms in the literature, I instead opted for two unorthodox and much more interesting heuristics.

The first was an ant colony optimization heuristic, which employs multiple pheromone-laying simulated ants to arrive at a usually good — yet completely unbounded — approximate solution. The second was a genetic algorithm borrowed from bioinformatics, which breeds several generations of candidate salesman paths under a “survival of the fittest” metric to arrive at a decent, likewise unbounded, approximate path. Since most of the heavy computation in each heuristic is mutually independent — from ant to ant, or from one bred path to another — both lend themselves trivially to parallelization.
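
As a rough illustration of that structure (an assumption about how such a solution might be organized, not the assignment's actual code), the following Java sketch farms independent ant-tour construction out to a thread pool and applies the shared pheromone update sequentially once all ants have finished:

    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelAntTsp {
        static final int CITIES = 50, ANTS = 32, ITERATIONS = 100;
        static final double EVAPORATION = 0.5;
        static final double[][] dist = new double[CITIES][CITIES];
        static final double[][] pheromone = new double[CITIES][CITIES];

        public static void main(String[] args) throws Exception {
            // Random planar cities; pheromone starts uniform.
            Random rnd = new Random(42);
            double[][] xy = new double[CITIES][2];
            for (double[] p : xy) { p[0] = rnd.nextDouble(); p[1] = rnd.nextDouble(); }
            for (int i = 0; i < CITIES; i++)
                for (int j = 0; j < CITIES; j++) {
                    dist[i][j] = Math.hypot(xy[i][0] - xy[j][0], xy[i][1] - xy[j][1]);
                    pheromone[i][j] = 1.0;
                }

            ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            Callable<int[]> antTask = ParallelAntTsp::constructTour;
            double best = Double.MAX_VALUE;

            for (int iter = 0; iter < ITERATIONS; iter++) {
                // Independent work: every ant builds its own tour on the thread pool.
                List<Future<int[]>> futures = new ArrayList<>();
                for (int a = 0; a < ANTS; a++) futures.add(pool.submit(antTask));
                List<int[]> tours = new ArrayList<>();
                for (Future<int[]> f : futures) tours.add(f.get()); // wait for all ants to finish

                // Shared state: evaporate, then deposit pheromone proportional to tour quality.
                for (double[] row : pheromone) for (int j = 0; j < CITIES; j++) row[j] *= EVAPORATION;
                for (int[] tour : tours) {
                    double len = tourLength(tour);
                    best = Math.min(best, len);
                    for (int i = 0; i < CITIES; i++) {
                        int u = tour[i], v = tour[(i + 1) % CITIES];
                        pheromone[u][v] += 1.0 / len;
                        pheromone[v][u] += 1.0 / len;
                    }
                }
            }
            pool.shutdown();
            System.out.printf("best tour length found: %.3f%n", best);
        }

        // Each ant picks the next unvisited city by pheromone-and-distance-weighted roulette wheel.
        static int[] constructTour() {
            Random rnd = ThreadLocalRandom.current();
            boolean[] visited = new boolean[CITIES];
            int[] tour = new int[CITIES];
            tour[0] = rnd.nextInt(CITIES);
            visited[tour[0]] = true;
            for (int step = 1; step < CITIES; step++) {
                int cur = tour[step - 1], next = -1;
                double total = 0;
                for (int c = 0; c < CITIES; c++)
                    if (!visited[c]) total += pheromone[cur][c] / (dist[cur][c] + 1e-9);
                double r = rnd.nextDouble() * total, acc = 0;
                for (int c = 0; c < CITIES && next < 0; c++) {
                    if (visited[c]) continue;
                    acc += pheromone[cur][c] / (dist[cur][c] + 1e-9);
                    if (acc >= r) next = c;
                }
                if (next < 0) // numerical edge case: fall back to any unvisited city
                    for (int c = 0; c < CITIES; c++) if (!visited[c]) { next = c; break; }
                tour[step] = next;
                visited[next] = true;
            }
            return tour;
        }

        static double tourLength(int[] tour) {
            double len = 0;
            for (int i = 0; i < CITIES; i++) len += dist[tour[i]][tour[(i + 1) % CITIES]];
            return len;
        }
    }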

In modern database-centric web services, an extremely large number of concurrent transaction requests can arrive within seconds — for example, when an online store experiences a sudden surge in sales of a particular product. Microsoft SQL Server 2008 offers a variety of transaction isolation levels to contend with these events; however, as with any concurrency control system, there is a tradeoff between the amount of concurrency allowed and the number of anomalies that may occur as a result of the increased parallelism. For example, one can prevent all anomalies stemming from concurrent transactions simply by making all SQL transactions completely serial, and likewise one can let a set of transactions run completely in parallel by ignoring all the anomalies and errors that may occur as a result. The key is to strike a balance: understand how and why these anomalies occur, then choose an isolation level that prevents only those anomalies that are actually possible within a given set of transactions while maintaining the highest level of concurrency possible.

To show this, I empirically measured the performance degradation of choosing an isolation level that restricts parallelism more than necessary – specifically, I wrote a mock Java client that generated thousands of “heavy” concurrent SQL transactions acting on a single table, with each transaction mimicking a type of anomaly that can occur in practice. The time required to complete all transactions was recorded under each of the isolation levels provided by SQL Server 2008. The project was nontrivial due to the amount of resources such an experiment consumes, both client-side and on the database itself, in order to replicate real-world scenarios.
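
A minimal sketch of the experiment's shape, assuming a JDBC client against a hypothetical Inventory table (the connection string, workload, and table are placeholders, not the original code): the same concurrent workload is timed under each of the four ANSI isolation levels exposed through standard JDBC.

    // Hedged sketch of the benchmark's shape, not the original client: run the same batch
    // of concurrent transactions under different JDBC isolation levels and time each run.
    // The connection string, table, and workload below are placeholders.
    import java.sql.*;
    import java.util.concurrent.*;

    public class IsolationBenchmark {
        static final String URL = "jdbc:sqlserver://localhost;databaseName=shop;user=sa;password=...";

        public static void main(String[] args) throws Exception {
            int[] levels = {
                Connection.TRANSACTION_READ_UNCOMMITTED,
                Connection.TRANSACTION_READ_COMMITTED,
                Connection.TRANSACTION_REPEATABLE_READ,
                Connection.TRANSACTION_SERIALIZABLE
            };
            for (int level : levels) {
                long start = System.nanoTime();
                runConcurrentTransactions(level, 16, 1000);
                System.out.printf("isolation level %d took %d ms%n", level, (System.nanoTime() - start) / 1_000_000);
            }
        }

        static void runConcurrentTransactions(int isolationLevel, int clients, int txPerClient) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(clients);
            for (int c = 0; c < clients; c++) {
                pool.submit(() -> {
                    try (Connection conn = DriverManager.getConnection(URL)) {
                        conn.setTransactionIsolation(isolationLevel); // the knob under test
                        conn.setAutoCommit(false);
                        for (int i = 0; i < txPerClient; i++) {
                            try (Statement st = conn.createStatement()) {
                                // "Heavy" read-modify-write on a single table, mimicking an update anomaly.
                                st.executeUpdate("UPDATE Inventory SET stock = stock - 1 WHERE productId = 42");
                            }
                            conn.commit();
                        }
                    } catch (SQLException e) {
                        e.printStackTrace();
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS); // wait for every client to finish its batch
        }
    }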

For a semester-long project in a graduate-level Bioinformatics course, I focused on a problem many of my classmates were facing. Because the course was interdisciplinary, not all enrolled students were computer scientists: of the roughly twenty students, several were mathematics, biology, and even physics majors, and many had trouble understanding the more complex algorithms covered in the course. Compounding this issue, many of these difficult algorithms rely on thinking multidimensionally – as required, for example, in multiple sequence alignment – which, while intuitive to most computing science graduate students, is inherently difficult to convey in a textbook or on slides without strong visual support.

I attempted to solve this problem by developing a learning tool that allows students to interactively step through difficult bioinformatics algorithms while simultaneously showing, via interactive visualizations, what the algorithms – and their related data structures – are doing. The most challenging aspect of the project was unraveling the algorithms in a way that is easily comprehensible to students while still maintaining the integrity of the algorithms themselves; in the end, I wrote three interactive visualizations in Java, focusing on the branch-and-bound algorithm for the partial digest problem, the bounded algorithm for the median string problem, and the dynamic programming solution to the longest-common-subsequence alignment problem. Based on a preliminary evaluation with my peers, the learning tool was a success, and would be well suited even for novice programmers.
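
For reference, the core recurrence that the longest-common-subsequence visualization steps through cell by cell looks like the following Java sketch; the actual tool wrapped this logic in an interactive front end, which is not shown here.

    // Minimal sketch of the dynamic-programming LCS recurrence that the visualization
    // walks through cell by cell; the interactive UI around it is omitted.
    public class Lcs {
        // dp[i][j] = length of the LCS of the first i characters of a and the first j characters of b
        static int[][] lcsTable(String a, String b) {
            int[][] dp = new int[a.length() + 1][b.length() + 1];
            for (int i = 1; i <= a.length(); i++) {
                for (int j = 1; j <= b.length(); j++) {
                    if (a.charAt(i - 1) == b.charAt(j - 1))
                        dp[i][j] = dp[i - 1][j - 1] + 1;                 // characters match: extend the diagonal
                    else
                        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // otherwise take the better neighbour
                }
            }
            return dp;
        }

        public static void main(String[] args) {
            int[][] dp = lcsTable("ACCGGTCG", "GTCGTTCG");
            System.out.println("LCS length: " + dp[8][8]); // prints 5 for these example sequences
        }
    }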

Online video streaming and broadcasting have become increasingly prevalent in the past few years, with websites such as Twitch.tv receiving massive amounts of attention and traffic. However, without large corporate backing and the considerable bandwidth headroom it brings, more modest sites and applications face major issues when attempting to stream live multimedia to large audiences via traditional client-server architectures. Fortunately, peer-to-peer (P2P) technology has emerged as a promising technique for building large-scale media distribution systems over the Internet; although most widely used in the BitTorrent protocol, P2P can be adapted for the efficient delivery of live streams, albeit with its own set of challenges and hurdles to overcome.

For a course project, I conducted a literature survey outlining current research topics in P2P streaming networks, with particular focus on tree- and mesh-based architectures for both live and on-demand video streaming, and on the possible security flaws and solutions associated with these architectures. In traditional client-server models, the bandwidth cost a content provider accrues to disseminate large volumes of data – as is the case in video streaming – is significant and, in most cases, scales poorly with the number of clients. However, the added complexity of peer-to-peer networks introduces many security issues, as each client – that is, each node in the peer-to-peer network – is given the responsibility of forwarding data to other users. The survey discusses the merits and flaws of a variety of proposed solutions that attempt to ensure fairness and safety, including credit-based models, bandwidth throttling protocols, and several data integrity and authentication mechanisms.

With the advent and popularity of social news aggregation sites such as Reddit, it has become difficult to differentiate submissions made by genuine users – usually for the betterment of the community – from stories submitted purely for monetary gain, often by so-called “power users”. More importantly, aside from a handful of popular novelty accounts, it is often difficult to pinpoint exactly which users are “power users” or are behaving oddly or suspiciously; on Reddit, the notion of “Reddit karma” is not a sufficiently comprehensive metric. To address this, I used the ideas of collaborative filtering and co-participation from the field of data mining to identify users in the Reddit community who are deemed influential, inferred from their communication and connections with other users as well as their day-to-day activities on Reddit. By identifying such influential users, the general public would be able to pay closer attention to their behaviours and respond quickly to suspicious activity.

As a direct result of this work, I formulated a robust mathematical model describing the framework required to empirically quantify the influence of every user on Reddit – an inherently difficult problem, since one must be mindful of fake user accounts that try to game the system to give the illusion of influence, and since much of the information about any given user is hidden from public view and has to be inferred through secondary means.
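
As a toy illustration of the co-participation idea only, with entirely made-up data and no claim to be the actual model, the sketch below treats two users as co-participants when they comment in the same thread and approximates raw influence by the number of distinct co-participants:

    // Toy illustration of co-participation, not the actual influence model: two users
    // "co-participate" when they comment in the same thread, and a user's raw influence
    // is approximated here by how many distinct users they co-participate with.
    import java.util.*;

    public class CoParticipation {
        public static void main(String[] args) {
            // thread id -> users who commented in it (hypothetical data)
            Map<String, List<String>> threads = Map.of(
                "t1", List.of("alice", "bob", "carol"),
                "t2", List.of("alice", "dave", "erin"),
                "t3", List.of("alice", "bob", "dave"));

            Map<String, Set<String>> coParticipants = new HashMap<>();
            for (List<String> users : threads.values())
                for (String u : users)
                    for (String v : users)
                        if (!u.equals(v))
                            coParticipants.computeIfAbsent(u, k -> new HashSet<>()).add(v);

            // Score = number of distinct co-participants; alice scores highest in this toy data.
            coParticipants.forEach((user, peers) ->
                System.out.println(user + " -> " + peers.size()));
        }
    }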

In 2008-2009, music and rhythm video games such as Rock Band and Guitar Hero were extremely popular with both children and adults; most notably, these games motivated people to try learning real instruments – the most apparent example being the transition from drumming in Rock Band to playing a real drum set. However, the mapping from the plastic guitar controller to a real guitar is nowhere near as natural as that from the plastic drums to a real drum set; I aimed to bridge this gap by taking advantage of the guitar controller’s inherent simplicity and familiarity to let users create simple yet expressive music.

I mapped all of the controls available on a standard Guitar Hero controller to the 48 fundamental frequencies playable on an acoustic guitar. To generate the audio, I created a Java application that used the JSyn API – a real-time, unit-generator-based synthesis engine – in conjunction with a modified Karplus-Strong synthesis technique to mimic the sound of an acoustic guitar at different frequencies and with varying effects, such as note decay and whammy. To interface with the Guitar Hero controller, I used the JInput library, a universal Java interface for external gaming peripherals. The end result was a familiar plastic guitar controller that plays realistic guitar sounds at different frequencies depending on the buttons being held and the tilt of the guitar (read from its built-in accelerometer), allowing a user to experiment and create simple compositions without prior musical knowledge.
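
For the curious, the Karplus-Strong technique itself is small enough to sketch in a few lines of plain Java (the project used the JSyn engine rather than this hand-rolled loop): a burst of noise circulates through a delay line whose length sets the pitch, and averaging adjacent samples acts as a low-pass filter, so the tone decays like a plucked string.

    // Standalone sketch of basic Karplus-Strong plucked-string synthesis; illustrative only,
    // not the JSyn-based implementation used in the project.
    import java.util.Random;

    public class KarplusStrong {
        static float[] pluck(double frequency, double seconds, int sampleRate, double decay) {
            int period = (int) Math.round(sampleRate / frequency); // delay-line length sets the pitch
            double[] delay = new double[period];
            Random rnd = new Random();
            for (int i = 0; i < period; i++) delay[i] = rnd.nextDouble() * 2 - 1; // initial noise burst

            float[] out = new float[(int) (seconds * sampleRate)];
            for (int i = 0, pos = 0; i < out.length; i++, pos = (pos + 1) % period) {
                out[i] = (float) delay[pos];
                // Low-pass filter: average the current sample with its neighbour, then feed back with decay.
                delay[pos] = decay * 0.5 * (delay[pos] + delay[(pos + 1) % period]);
            }
            return out;
        }

        public static void main(String[] args) {
            float[] a440 = pluck(440.0, 1.0, 44_100, 0.996); // one second of an A4 "pluck"
            System.out.println("generated " + a440.length + " samples");
        }
    }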

Resume

A version of my resume should be visible below using Google's embedded document viewer. If it doesn't appear, please click here to download my original resume in PDF format.