Notes From a Full-Stack Developer

Kevin Raffay's WordPress Blog

How I Used SSIS to Save My Company a Ton of Money


I recently implemented an SSIS solution to automate the complicated process my company used to send out promotional emails. The old process took a few days and involved several people: developers, DBAs, and others. It was brutally tedious and complicated. For those of you who don't know, SSIS is Microsoft's SQL Server Integration Services, the successor to DTS (Data Transformation Services).

To summarize, each month my company would send out a few million emails to former customers, trying to entice them to renew their memberships. How these customers were selected was codified in a series of SQL scripts steeped in tribal knowledge and arcane business rules. To top it off, the rules changed each month at the whim of a Vice President whose job was to maintain the stream of recurring cash flow that kept the business running. There was a constant effort to tweak the criteria and optimize the response rates of these emails while not spamming or harassing former members who had no interest in renewing.

This process was ripe for automation, and SSIS beats ad-hoc SQL most of the time. Keep in mind that SSIS is just a tool, and as I always say: a fool with a tool is still a fool. I had to make sure that all this work would actually produce some ROI. If my SSIS packages were brittle, unmaintainable, and cryptic, I would have replaced one faulty process with another, even more complicated one.

On top of that, once the data was collected, it had to be approved by the VP, and if there was any doubt about the criteria, the whole process was run again until the results satisfied the powers that be. For example, if last month's run consisted of 2.2 million members but this month's run had only 2.19 million, a junior developer had to bird-dog the discrepancy and report a detailed explanation to the VP. Did I mention that these issues had to be resolved in time to send the emails out on the first day of the month? If the junior dev started the process at the end of the month and the numbers didn't jibe, guess what: she would be there late into the night on the last day of the month, running queries.

After seeing this junior developer data-diddling her career away (she dreamed of writing JavaScript and mobile apps, not running SQL scripts that took hours to complete), I decided to do something about it. Over the course of a few weeks, I extracted the arcane business rules embedded in thousands of lines of legacy SQL and codified them into a robust SSIS solution that Extracted the production data, Transformed it by applying the business rules that identified valid members, and Loaded the results into CSV files: the classic ETL process.

My first challenge was to get away from running real-time queries against a production database to build the initial set of former members. That step alone took a developer hours to collect, validate, and eventually turn into CSV files for upload to our email provider. I essentially designed a poor man's data warehouse, staging the production data into my sandbox with SSIS and then having my SSIS packages call stored procedures to transform it. Oh, I know what you are saying: don't you have a data warehouse or reporting team to do this? Well, yes, but for reasons I can't detail here, that team reported to a different manager, and there were office politics involved.
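To give a flavor of what replaced the manual copy-and-paste step, here is a minimal sketch of the export stage: call a staging stored procedure and stream the results to a CSV file. The connection string, procedure name, and column names are hypothetical placeholders, not the real production objects.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

class StagingExport
{
    static void Main()
    {
        // Placeholder connection string and proc name -- the business
        // rules live inside the stored procedure, not in this code.
        const string connectionString =
            "Server=SANDBOX;Database=Staging;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.usp_StageFormerMembers", connection))
        using (var writer = new StreamWriter("FormerMembers.csv"))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                writer.WriteLine("MemberId,Email,LastRenewalDate");
                while (reader.Read())
                {
                    // Naive CSV formatting; fine for fields with no commas.
                    writer.WriteLine("{0},{1},{2:yyyy-MM-dd}",
                        reader["MemberId"], reader["Email"],
                        reader["LastRenewalDate"]);
                }
            }
        }
    }
}
```

In the real solution this shape lives inside SSIS packages rather than a console app, but the division of labor is the same: the proc applies the rules, the package just moves the data.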

Along the way, I learned a lot more about SSIS than I cared to (being a full-stack developer makes me hesitant to operate on one tier for too long), but it was a worthy effort. What used to take days now takes hours, and that is just the time for the ETL process to run on the back end. It is not like some dev is sitting there waiting for the results of a query to pop up in SSMS so she can copy and paste them into Excel (yes, you heard correctly: she was taught to literally copy and paste the results into an Excel file and save that as a CSV). I never understood why this was the process, and when I asked, I got the typical cargo-cult response: "We always did it this way."

For those who are interested, I summarized this project in a deck posted on Slideshare:

[Slideshare embed]

Written by Kevin Raffay

July 31, 2016 at 6:42 pm

Posted in Uncategorized

Basic CRUD Development With ORM/Telerik/WCF – Part 2 – ORM


For our project, we can use the Lite version of LLBLGen Pro, which is limited to eight entities per project.

Here is a screenshot of the LLBLGen Pro project I used:

[Screenshot: LLBLGen Pro project]

When working with the LLBLGen Pro ORM, I found a need to generate not only the regular entities but also objects called "Typed Views" to serve as DTOs (Data Transfer Objects). Most ORM entities carry extra metadata to manage state, such as an "IsDeleted" property. That makes working with them powerful, but it adds overhead to the payload in a Service-Oriented Architecture (SOA). I wanted to use DTOs, or POCOs (Plain Old C# Objects) as they are also known, between the service tier and the presentation layer. This ORM lets me generate these DTOs, saving me a lot of time.

The class diagrams below show the difference between an entity and a typed view:

[Class diagrams: entity vs. typed view]

Eventually, I want to serialize the typed views into lightweight JSON objects. Serializing full entities produces a complex JSON structure that does not work well for a RESTful application, as we shall see.
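To give a feel for the difference, here is a rough sketch (not the generated code) of a typed-view-style DTO next to Json.NET serialization. The class and property names are illustrative, not the actual generated members.

```csharp
using Newtonsoft.Json;

// A typed-view-style DTO: plain properties, no change-tracking state.
public class TemplateDto
{
    public int TemplateId { get; set; }
    public string Name { get; set; }
    public string PatternName { get; set; }
}

class Demo
{
    static void Main()
    {
        var dto = new TemplateDto { TemplateId = 1, Name = "Home", PatternName = "2-column" };

        // Produces a flat, lightweight payload:
        // {"TemplateId":1,"Name":"Home","PatternName":"2-column"}
        // A full entity would drag its change-tracking metadata along.
        System.Console.WriteLine(JsonConvert.SerializeObject(dto));
    }
}
```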

Written by Kevin Raffay

August 24, 2015 at 7:00 pm

Posted in CRUD, LLBLGen Pro

Basic CRUD Development With ORM/Telerik/WCF – Part 1 – Database


The LPA (Landing Page Admin) database is pretty basic, with eight tables, but I will only use four for now.

Here is the database diagram:

[Database diagram: LPA]

Templates – Each landing page has a template, which contains metadata about the page.

GlobalTokens – Tokens stored as name-value pairs that apply to all templates.

TemplateTokens – Each template inherits the set of global tokens (see the sketch after this list).

Patterns – Patterns are the layouts that define a template (2-column, 3-column, etc.).
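The token-inheritance rule is the part worth spelling out. Here is a minimal sketch of how I think about the merge, assuming a template-level token overrides a global token with the same name (my assumption; the SQL script is the authority):

```csharp
using System.Collections.Generic;

static class TokenResolver
{
    // Merge the global tokens with a template's own tokens.
    // Assumption: a template-level token overrides a global
    // token that has the same name.
    public static Dictionary<string, string> Resolve(
        IDictionary<string, string> globalTokens,
        IDictionary<string, string> templateTokens)
    {
        var resolved = new Dictionary<string, string>(globalTokens);
        foreach (var pair in templateTokens)
            resolved[pair.Key] = pair.Value; // override or add
        return resolved;
    }
}
```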

You can download the SQL script below to create these objects, but it may be easier to create a database called "LPA" yourself and then execute just the part of the script that creates the objects. The script also contains some additional objects used for logging.

 SQL Script on Dropbox

Create Database Script as Word Doc

Written by Kevin Raffay

August 16, 2015 at 2:14 am

Posted in CRUD, Microsoft


Basic CRUD Development With ORM/Telerik/WCF – Intro


I recently finished up an engagement at a client where I had to build a fairly basic but robust application to manage landing pages. Landing pages are exactly what they sound like: pages that a web user "lands" on when searching for a site.

In this project, the company had over 50 landing pages, and each was a distinct set of static HTML with its own CSS, JavaScript, and images. They were hosted on the web, and making changes across all of the landing pages was a nightmare, because a developer would have to pull down the content, edit it, and then upload it back to the website.

One of the problems the company was having was that the pages weren't entirely static. Many had dynamic content that had to be "injected" when the pages were rendered. Some of this was content like interest rates, which change frequently. In addition, the company had many different lines of business, each with its own phone numbers, some of which also changed frequently.

In the next series of posts, I will demonstrate how I built a distributed ASP.NET MVC application to manage these landing pages. The goal was to tokenize the content that changed frequently and use the power of MVC to render the landing pages with a mix of static and dynamic content.
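To make "tokenize" concrete, here is a minimal sketch of the rendering idea. The {{TokenName}} delimiter syntax is just my illustration here, not necessarily what the finished app uses:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class TemplateRenderer
{
    // Replace {{TokenName}} placeholders in a template with current values.
    public static string Render(string template, IDictionary<string, string> tokens)
    {
        return Regex.Replace(template, @"\{\{(\w+)\}\}", match =>
        {
            string value;
            return tokens.TryGetValue(match.Groups[1].Value, out value)
                ? value
                : match.Value; // leave unknown tokens untouched
        });
    }
}
```

With something like that in place, a fragment such as <span>Call {{PhoneNumber}} about our {{InterestRate}} APR</span> picks up whatever values are current in the database at render time.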

None of this was rocket science, but there were some constraints. Of course, it had to be a distributed SOA application, for scalability reasons, and besides, tightly coupled monolithic applications are just bad. The company also had a very onerous build-and-release process, and standing up intranet sites was essentially impossible. There was just too much pushback from DevOps to let us set up a project that required much ops support, even for an internal app. It also couldn't require many environment-specific configuration settings.

It just had to work, without ops getting involved.

Normally, for a project like this, I would create a website, set up a WCF service tier, and host both on IIS. But this was not your typical company 🙂

I also wasn’t going to get a dedicated DBA to write my stored procedures, and in any event, I didn’t want to use any.  This was a place where they still wrote tons of procs for basic CRUD applications, and expected the devs to write tedious ADO.NET code.

Really? Hey, the '90s called; they want their application development process back 🙂

So I had to, and wanted to, use an ORM.

And it wasn't going to be Entity Framework. EF is a nice, free ORM, but you get what you pay for. I prefer a world-class ORM like LLBLGen Pro.

I also didn't want to deal with a ton of WCF ServiceModel configuration; ops would not help when we had to change endpoints as we moved from DEV to QA to PROD. So I explored some WCF self-hosting options I scrounged up on the internet. BTW, ServiceStack would have been great for a project like this, but an elegant, basically config-less, endpoint-ignorant, and dependency-free service architecture would have been a bold and revolutionary concept at this company.
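For reference, the bare-bones shape of WCF self-hosting looks roughly like this. The contract, implementation, and address below are placeholders for illustration, not the actual LPA service:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface ILandingPageService
{
    [OperationContract]
    string GetTemplate(int templateId);
}

public class LandingPageService : ILandingPageService
{
    public string GetTemplate(int templateId)
    {
        return "template " + templateId; // stub for illustration
    }
}

class Program
{
    static void Main()
    {
        // Self-host in a plain console process: no IIS, no ops tickets.
        var baseAddress = new Uri("http://localhost:8080/lpa");
        using (var host = new ServiceHost(typeof(LandingPageService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(ILandingPageService),
                new BasicHttpBinding(), string.Empty);
            host.Open();
            Console.WriteLine("Listening at {0}. Press Enter to stop.", baseAddress);
            Console.ReadLine();
        }
    }
}
```

The base address can come from a single appSetting, which keeps the environment-specific configuration down to one value as you move from DEV to QA to PROD.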

Finally, I wasn't about to reinvent the UI wheel by handcrafting my own client-side grids and writing tedious JavaScript to input and render data. I would use a third-party component suite that was responsive and robust: Telerik Kendo UI.

I know, I hear you purists out there…

“WHAT IS THIS SORCERY YOU SPEAK OF? BLASPHEMY! THOU SHALT WRITE EVERY PIECE OF CODE IN AN APPLICATION! NO EXTERNAL DEPENDENCIES! NO ORMS! NO THIRD-PARTY COMPONENTS! NOT EVEN F-ING JQUERY!”

Yeah, right. This is an internal CRUD app, not FACEBOOK or even GOOGLE. I don't need to write yet another JSON parser, logging component, tabular grid, responsive CSS framework, service bus, or inversion-of-control container. I make a living writing robust business applications quickly and efficiently, not indulging my hacker ethos.

Anyway, I'll go through the process in a few posts and upload the source to GitHub.

Here’s a screenshot (I know, I told you it was basic):

[Screenshot: LPA admin UI]

Written by Kevin Raffay

August 16, 2015 at 1:16 am

Posted in Microsoft

Tagged with , , ,

Slashdot: What Portion of Developers Are Bad At What They Do?



I would say that at any given time, 100%. What developer has not been horrible at some new technology they were trying to get their head around? I look at some of my first jQuery or AngularJS code snippets, and they were bad.

But the key question is: what portion of developers stop sucking at what they do? I would like to think most improve, or abandon the technology they are bad at. For instance, I suck at SharePoint. I am bad at it. So I don't do it. I turn down positions involving SharePoint.

However, the OP at /. is appalled that some of the people he interviews are clueless about public/private key encryption. But does that make them bad developers? Or could they be awesome client-side ninjas who hate dealing with security protocols and leverage OAuth as much as possible without knowing the implementation details?

I would have to agree that if you don't have a general grasp of public/private keys, that may be a clue to dig deeper into your overall level of competence, especially if you are pitching yourself for an architect role.

BTW, some great architects are horrible developers😉

Written by Kevin Raffay

February 14, 2015 at 6:04 pm

Posted in Uncategorized

The 25 costliest tech screw-ups of all time


All organizations depend on IT to keep operations up and running. That means tech mistakes – even seemingly minor ones – can have a huge impact on the organization and its bottom line.

 

http://www.itmanagerdaily.com/the-25-costliest-tech-screw-ups-of-all-time/

Written by Kevin Raffay

July 2, 2013 at 9:45 pm

Posted in Uncategorized

My Favorite Things To Do In Huntington Beach


1. Sunday brunch at the Black Bull: come hungry, come often

2. Tuesday night downtown street fair: always something new

3. Dinner at Cucina Alessa: at least twice a month, and you’ll probably run into the HB mayor, Joe Carchio.

4. Karaoke at Killarney’s on Monday nights: OK, I only did this once, and found it surprisingly fun to watch.  Watch, not sing🙂

5. Any day at the Doggy Beach.

Written by Kevin Raffay

January 21, 2012 at 10:53 pm

Posted in Uncategorized