I have a decade of experience building customer-impacting engineering solutions across web and mobile, leveraging data services, databases, and third-party integrations (e.g., Google Firebase, Dropbox, Amazon Web Services, Chef, and beyond). I am an active advocate for user-centered design and engineering, making data-driven decisions, and working across teams to clarify ambiguities and technical limitations. But mostly, I love learning new technologies and new ways of looking at things.
If you want a copy of my most recent resume, it's here.
Planned, created, tested, scheduled, and executed a strategy to move our legacy components from a public VPC in AWS, orchestrated with Chef, to a Kubernetes cluster deployable via Buildkite or Slack, monitored by Prometheus, with alarms routed through Slack and PagerDuty.
Performed upkeep on Chef recipes and AWS Auto Scaling Groups to maintain confidence in our ability to withstand partial failure. This required coordination with the QA and DotCom teams to avoid hindering our development pipeline.
The final step in the move off of Chef. Removed all of the old, now-deprecated infrastructure and cleaned up the code base to remove as much of the Chef code as possible (barring configuration files that have since been repurposed).
WordPress represented a complex mix of institutional working knowledge, security issues, and poor site performance. Replacing it would disrupt the first while delivering on the other two. Moving to Gatsby took a lot of work around tooling for the editorial team.
The previous iteration of our template pages was rendered by Django. Moving them to static content gained us a lot of SEO goodness and removed some points of failure.
Planned, created, tested, and deployed a new deployment pipeline to a newly orchestrated infrastructure for our legacy Python components.
Planned, created, tested, and deployed a paired staging environment in Production, reachable via cookie. This enabled the final stage of moving off of Chef.
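The routing idea is simple enough to sketch. Below is a minimal Python illustration of cookie-based selection between the production and staging backends; the cookie name and upstream hosts are hypothetical, and in practice this kind of split usually lives at the proxy layer (nginx) rather than in application code.

```python
# Hypothetical sketch: choose a backend based on an "env" cookie.
from http.cookies import SimpleCookie

PRODUCTION_UPSTREAM = "prod-backend.internal:8080"  # hypothetical host
STAGING_UPSTREAM = "staging-backend.internal:8080"  # hypothetical host

def pick_upstream(environ):
    """Return the upstream host for a WSGI request, based on the env cookie."""
    cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    if "env" in cookie and cookie["env"].value == "staging":
        return STAGING_UPSTREAM
    return PRODUCTION_UPSTREAM
```

Anyone on the team could opt into the paired environment by setting the cookie in their browser, while regular traffic continued to hit production.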
Planned, scheduled, and executed a maintenance window, with an outage page, to upgrade our production MySQL database and gain the ability to index tables without taking downtime.
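For context, the feature in question is MySQL's online DDL (available since 5.6 for InnoDB), which lets an index build proceed while reads and writes continue. A hedged sketch, with hypothetical table and connection details:

```python
# Hypothetical sketch of an online index build after the upgrade.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="prod-db.internal",  # hypothetical host and credentials
    user="admin",
    password="...",
    database="picmonkey",
)
cur = conn.cursor()
# ALGORITHM=INPLACE / LOCK=NONE keeps the table readable and writable
# while the index is built; before the upgrade this locked the table.
cur.execute(
    "ALTER TABLE uploads "
    "ADD INDEX idx_user_created (user_id, created_at), "
    "ALGORITHM=INPLACE, LOCK=NONE"
)
```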
The second piece of the legacy migration from Chef to Kubernetes was moving our S3 Proxy, which is responsible for uploading content to S3 and then making a route to that content available to avoid issues with CORS.
Added monitoring and alarms using Prometheus to create metrics and report them to CloudWatch and Stackdriver. Made dashboards available via Grafana, gated behind a Google OAuth login.
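A minimal sketch of the instrumentation side, using the official prometheus_client library; the metric names, labels, and port are hypothetical, and the CloudWatch/Stackdriver reporting and Grafana dashboards sit on top of what this exposes:

```python
# Expose request counts and latencies for Prometheus to scrape.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled", ["route"])
LATENCY = Histogram("app_request_seconds", "Request latency (s)", ["route"])

def handle(route):
    REQUESTS.labels(route=route).inc()
    with LATENCY.labels(route=route).time():
        time.sleep(0.01)  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)  # serves /metrics on port 9100
    while True:
        handle("/upload")
```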
Slack is used often at PicMonkey, and a number of integrations already existed. Enriching this channel with the ability to deploy our Kubernetes components had a paved path.
This piece happened to be the simplest to carve off and move over, and it was the first real piece of our legacy stack to land on Kubernetes. Coordinated with QA on how to test it effectively and get it out the door.
Generated new pages to describe our content via Gatsby. This required routing updates in nginx, configuring a CDN cache clear on deploy, and generating sitemaps automatically, updated on each deploy.
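The sitemap step is the most mechanical part; a hedged sketch is below. In a Gatsby build this is typically handled by a sitemap plugin, and the route list and domain here are hypothetical:

```python
# Generate a sitemap.xml from the site's known routes at deploy time.
from xml.etree.ElementTree import Element, SubElement, ElementTree

BASE_URL = "https://www.picmonkey.com"
ROUTES = ["/", "/templates/", "/templates/birthday-card/"]  # hypothetical

def write_sitemap(path="sitemap.xml"):
    urlset = Element("urlset",
                     xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for route in ROUTES:
        url = SubElement(urlset, "url")
        SubElement(url, "loc").text = BASE_URL + route
    ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

write_sitemap()
```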
Our editorial team uses Contentful to create content. Building multiple times a day means that the latest changes get picked up.
A number of our Kubernetes base components have newer versions that leverage the latest Kubernetes features. Upgrading these eases the burden of configuring our services.
We have had some bad deploys go uncaught in Production for a number of hours. Being able to alarm on this early informs our ability to roll back.
Moved content off of WordPress and onto a new statically rendered framework called Gatsby. As with the template pages, this required routing updates in nginx, a CDN cache clear on deploy, and automatically generated sitemaps updated on each deploy.
When we successfully separated the QA legacy environments, they were still sharing one backend for our image services. To complete the siloing of our test environments, we needed to spin up new instances of the image services for each QA environment.
During the security audit, permissions to access the production database were removed, which hampered the BI team. To re-enable them, we proxied Mode requests through our Kubernetes cluster.
Our previous deploy method relied on uploading assets before the deploy step, and that ordering encountered issues due to improper salting. Changing where that step took place avoided the issue entirely.
We had a number of BI processing jobs that extracted data from our production AWS RDS database and moved it to Google BigQuery, where it was processed further to provide data for our dashboards in Mode Analytics.
Removed dependencies on Chef so that our legacy components could be broken up for migration.
To move off of Chef, PicMonkey needed a new platform for orchestration and deployment, and Kubernetes was selected as the most appealing option. Spinning up production-worthy clusters in AWS was a blocker; staying in AWS was preferable to calling from Google Cloud apps back to our database in AWS.
As part of the overarching theme of getting PicMonkey off of Chef and onto Kubernetes, all of our legacy components needed to be able to run in containers.
Due to how our fleet utilizes Chef, it is best practice to set aside time to ensure that newly provisioned hosts come into service without a hitch. This upkeep normally happens every January and usually means fixing our recipes by upgrading package dependencies.
PicMonkey keeps a number of machine pools available to Buildkite for build processing. These hosts are less expensive if they are preemptible (they live at most 24 hours), and given that lifecycle, it benefits us greatly for the hosts to be replaceable.
One of our competitors, Canva, was hacked, which inspired us to look into our own security and ensure that as many of our technical resource accounts as possible have restrictive access and permissions.
Our QA team of six was struggling to time-share a single QA environment. Adding capacity for multiple in-flight branches to be tested by different people increased our throughput for features and bug fixes.
Seeing a market opportunity and the benefit of diversifying, we started work on VidMonkey. Product and market competitive analysis (technical, positioning). Feature design for OTA (over-the-air) updates: Firebase, Protobufs, user flows. Feature design for remote configuration (using Firebase). Feature design for themes and content cards.
We pivoted from the chat app we had constructed to a mobile version of PicMonkey to better align with our users' expectations. Worked closely with design to create a number of different features. Java/Kotlin, Objective-C/Swift, C++, Java Native Interface (JNI), SWIG, Protobuf, GLSL (basic shaders), and analytics for both platforms.
Added facial recognition to the app.
Launched: July 27, 2016
1.10.4 (Android): rated 3.9/5.0
1.10.2 (iOS): rated 4.8/5.0
Coded a mechanism to migrate multiple code bases into a single one, and coordinated across different technical teams to ensure a smooth transition. Afterwards, the coordination effort between those teams was drastically reduced.
Designed and built various services in Java using Spring Boot: phone registration, contacts, message groups, messages, and image stores. A websocket proxy, written in Node on Express, interfaced between the mobile clients (iOS and Android) and the services. A client-side cache limited the number of required service calls.
Self-hosted WordPress instance, with auto-updates disabled so it won't go down suddenly in production. Upgrading components can be fraught with dependency issues.
Introduced to the stack: a Flash app running on a Chef-deployed fleet in AWS. Chef (roles, environments, recipes, versioning), AWS (instances, RDS, Auto Scaling Groups, run scripts, Elastic IP with failover, Route 53), DNS with Dynect, caching with Fastly.
5-person startup
Worked on PayPal payment integration into the platform.
Used Spring for nearly everything under the hood, along with Git, Hibernate, and Spring Boot.
Wrote a Google App Engine app to record and track contractors' time per project. Intended for use by Rooster Park contractors; it reached an internal alpha.
Worked on a next-generation RESTful platform for ESPN with Mark Masse, using Java and a proprietary framework for defining and creating data models and definitions (WRML).
Remote office (ESPN headquarters is in Bristol, Connecticut)
Manager laid off within 6 months of project start
Scaled the fleet down from 50 servers to 2, running our 22 core services on each host. Simplified the service separation and areas of responsibility.
Instrumented key points to measure third-party call times. Tuned caching via the Admin Panel.
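A rough illustration of the instrumentation idea, shown in Python for brevity rather than the stack's own language; the decorator and the shipper call are hypothetical:

```python
# Time third-party calls and log their durations.
import functools
import logging
import time

log = logging.getLogger("thirdparty")

def timed(name):
    """Decorator that logs how long the wrapped call takes, in ms."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000
                log.info("%s took %.1f ms", name, elapsed_ms)
        return inner
    return wrap

@timed("shipper.get_rates")
def get_shipping_rates(order_id):
    ...  # stand-in for the real third-party call
```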
Wrote specifications for, and implemented, the Inventory service in Spring MVC, integrating with a third-party shipper. RESTish.
Managed and maintained the PIM tool, written in PHP on a PHP framework, and added features.
Developed, managed and maintained a set of internal tools using Spring MVC.
Delivery of Decalz, Ultimate Wallpaper, Z-List, Redeem Codes and PayPal payments to users.
Integrated with Amazon Payments and PayPal to create a service capable of receiving payments from customers for goods.
Wrote a “middle-man” service for routing service calls to the appropriate clusters. Encapsulated critical information in a cache using Hazelcast, with replication via IceStorm. Wrote several integration tests to verify functionality before deploying to Production.
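The routing core reduces to a cache lookup followed by a forward. A hedged Python sketch, with hypothetical account keys and cluster endpoints standing in for the real Hazelcast-backed mapping:

```python
# Route a call to whichever cluster owns the account.
import json
import urllib.request

CLUSTER_CACHE = {"acct-42": "us-west"}  # stand-in for the Hazelcast map
CLUSTERS = {
    "us-east": "http://us-east.internal/api",  # hypothetical endpoints
    "us-west": "http://us-west.internal/api",
}

def route_call(account_id, payload):
    """Look up the owning cluster and forward the request to it."""
    cluster = CLUSTER_CACHE.get(account_id, "us-east")  # default cluster
    req = urllib.request.Request(
        CLUSTERS[cluster],
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```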
Wrote a service capable of serving ~200 TPS to customers bidding on goods. Used the ICE framework for service definition and protocol, along with MySQL, JDBC, and Memcached.
Used Sass, compiled with Ruby, to create targeted CSS for the webstore. Cleaned up the DOM by making uniform rules for forms, images, etc.
Created quick and easy-to-operate tools for documenting user-facing portions of the site in bulk using Java Reflection, JAXB, and Perl CGI.
Re-engineered some fraud services to be scalable alongside new services. Used jBPM workflows, queueing services, Hibernate, and WSDL to create interdependent services. Installed pre-run verification to prevent invalid state.
Added Google tracking for general page views and order events by creating new pagelets with Java/JSP and corresponding JavaScript.
Used Selenium and JMeter to simulate user traffic on a website. Created several log-parsing scripts in Perl to extract and interpret the data for each run.
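The log-parsing side is easy to sketch; the original scripts were Perl, so this Python version is only an illustration, and the log format shown (timestamp, URL, response time in ms) is hypothetical:

```python
# Aggregate per-URL request counts and mean response times from a log.
import re
from collections import defaultdict

LINE = re.compile(r"\[(?P<ts>[^\]]+)\]\s+(?P<url>\S+)\s+(?P<ms>\d+)ms")

def summarize(path):
    totals = defaultdict(lambda: [0, 0])  # url -> [count, total_ms]
    with open(path) as fh:
        for line in fh:
            m = LINE.search(line)
            if not m:
                continue
            stats = totals[m.group("url")]
            stats[0] += 1
            stats[1] += int(m.group("ms"))
    return {url: (count, total / count)
            for url, (count, total) in totals.items()}
```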
Platform built on the Portlet specification, with aggregation on the client side via JavaScript. Created a configurable reporting API via XML and a SAX parser in Java.
Optimized URL creation for better crawling, and worked to reduce page weight for better keyword weight. Added meta tags and laid the groundwork for sitemaps.
Prototyped several merchant-configurable widgets in Java/JSP. Helped shape new platform considerations with Java design patterns (command, factory, beans). Utilized Spring to externalize class configurations.
Used Mason and Perl to make a stand-alone widget enabling self-service styling of customer-facing emails. Responsible for the merchant-facing tool in Mason, which communicated with MySQL for more general configuration options.
Created a robust CGI page that made a simple SOAP call for the user. Handled a general GET URL format for easy bookmarking.
Created a Perl reporting script that used DBI to query a Sybase DB and collect user statistics and billing information.
Composed Java utilities for cleaning language files: traversed directories, parsing files for keys and pairing them with values in other files.
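The same idea in a short, hedged Python sketch (the originals were Java); the directory layout and key=value file format are assumptions:

```python
# Report locale keys present in the base language file but missing elsewhere.
import os

def load_keys(path):
    """Parse a key=value file and return its set of keys."""
    with open(path, encoding="utf-8") as fh:
        return {line.split("=", 1)[0].strip()
                for line in fh
                if "=" in line and not line.lstrip().startswith("#")}

def missing_keys(root, base="en.properties"):
    base_keys = load_keys(os.path.join(root, base))
    report = {}
    for name in os.listdir(root):
        if name.endswith(".properties") and name != base:
            report[name] = sorted(base_keys - load_keys(os.path.join(root, name)))
    return report
```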
I'm an avid soccer fan, and generally a fan of sport. I play indoor and occasionally outdoor. On the field, I favor the midfield, where you need to keep your head on a swivel and you get the opportunity to set someone up to make a play.
Sci-Fi and Fantasy books are my go-to reading genre for entertainment. I have read a little of many authors, such as Asimov, Arthur C. Clarke, Dan Simmons, Neal Stephenson, Robert Jordan, Terry Goodkind, Tolkien, Vernor Vinge, Heinlein... I can keep going. I'm very fond of the thought that these authors put into understanding how fragile the nature of our current reality is, and the ripple that simple things can make to change EVERYTHING.
I track news about technology, business, and the law. I believe we are seeing a lot of turbulence as the impact of the internet reaches deeper into our ability to communicate, process, and understand each other. I'm not sure what the future holds, but I hope we can make the best-informed decisions based on eventual outcomes, and not focus on short-term gains.
Of course, check out my Instagram. I love trying to capture a good photo!