Notes From a Full-Stack Developer

Kevin Raffay's WordPress Blog

NPM Firewall Issue


I recently came across an issue when trying to install json-server, a REST API mock tool.  I am using this library temporarily so that I can focus on Angular 4 development, and it serves the purpose of getting a fake back-end running quickly until I get my Web API tier working.

When running `npm install json-server --save`, I got a bunch of npm errors:

npm ERR! Windows_NT 6.3.9600
npm ERR! argv "G:\\Program Files\\nodejs\\node.exe" "G:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "json-server" "--save"
npm ERR! node v6.11.1
npm ERR! npm  v3.10.10


Usually, this would lead me to Stack Overflow, but the answer was buried in a Node.js thread on GitHub.  I was behind my company firewall, but fortunately I did not have to deal with an SSL certificate, which is what some of the answers on the internet alluded to.  All it took was one command:

npm config set strict-ssl false
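One caveat worth adding: this disables certificate verification for npm entirely, so a reasonable habit (a sketch, not something the original workaround requires) is to verify the setting and flip it back on once you no longer need it:

```shell
# Disable strict SSL only while behind the corporate proxy
npm config set strict-ssl false

# Confirm the current value
npm config get strict-ssl

# Re-enable certificate checking once the workaround is no longer needed
npm config set strict-ssl true
```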

h/t Yehor Sergeenko

Written by Kevin Raffay

August 2, 2017 at 5:49 pm

Posted in Uncategorized

Angular CLI and Karma Naming Issue


I found another gotcha when setting up my Angular CLI project per the instructions, because the Karma unit test config file references the old Angular naming convention.  Without going too deep into the issue, I had to change all instances of 'angular-cli' to '@angular/cli'.  For example, the old naming had to be changed in the frameworks and plugins sections:

frameworks: ['jasmine', 'angular-cli'],
plugins: [

New format:

frameworks: ['jasmine', '@angular/cli'],
plugins: [
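For context, here is roughly what the relevant portion of karma.conf.js looks like after the rename — a sketch based on the Angular CLI 1.x defaults, so your generated file may differ slightly:

```javascript
// karma.conf.js (relevant portion only)
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular/cli'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      // this is the plugin that used to be referenced as 'angular-cli'
      require('@angular/cli/plugins/karma')
    ],
    angularCli: {
      environment: 'dev'
    }
  });
};
```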

Written by Kevin Raffay

July 26, 2017 at 9:12 pm

Posted in Uncategorized

Setting up Angular 4 with Bootstrap


I’ve been doing a lot of Angular development at my current contract, and there was a bit of a learning curve because of all the recent changes.  In the next few posts I’ll try to document the steps to get up and running, along with links to the good resources that helped me.

I recommend starting with this Sitepoint post:

The Ultimate Angular CLI Reference Guide

When you get to the point where you can see “app works!” in the browser, adding Bootstrap support is not that intuitive.  First, run this in the same folder that your angular-cli.json file is in:

npm install --save bootstrap


You will then need to update the angular-cli.json file.  In the apps/styles node, add a reference to the Bootstrap CSS file:

"styles": ["../node_modules/bootstrap/dist/css/bootstrap.min.css","styles.css"],
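To show where that line lives, here is a sketch of the surrounding angular-cli.json structure in Angular CLI 1.x (other keys omitted — your generated file will have more):

```json
{
  "apps": [
    {
      "root": "src",
      "styles": [
        "../node_modules/bootstrap/dist/css/bootstrap.min.css",
        "styles.css"
      ]
    }
  ]
}
```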


To confirm that Bootstrap is installed, run ng serve and you will see that your “app works!” text is no longer rendered in a Times New Roman font, but in Arial.



Written by Kevin Raffay

July 18, 2017 at 6:58 pm

Posted in Uncategorized

How I Used SSIS to Save My Company a Ton of Money


I recently implemented an SSIS solution to automate a complicated process that my company used to send out promotional emails.  The old process used to take a few days and involved several people – developers, DBAs, etc.  It was brutally tedious and complicated.  For those of you that don’t know, SSIS is Microsoft’s SQL Server Integration Services, which used to be DTS (Data Transformation Services).

To summarize, each month my company would send out a few million emails to former customers, trying to entice them to renew their memberships.  How these customers were selected was codified in a series of SQL scripts steeped in tribal knowledge and arcane business rules.  To top it off, each month these rules changed, based on the whims of a Vice President whose job it was to maintain the stream of recurring cash flow that kept the business running.  There was a constant effort to tweak the criteria and optimize the response rates of these emails, while not spamming or harassing former members who had no interest in renewing.

This process was ripe for automation, and SSIS beats ad-hoc SQL — most of the time.  Keep in mind that SSIS is just a tool, and as I always say: a fool with a tool is still a fool.  I would have to make sure that all this work would actually result in some ROI.  If my SSIS packages were brittle, unmaintainable, and cryptic, I would have replaced one faulty process with another, even more complicated one.

To top it off, once this data was collected, it would have to be approved by the VP, and if there was any doubt about the criteria, the whole process would be run again until the results were satisfactory to the powers that be.  For example, if last month’s run consisted of 2.2 million members, but this month’s run only had 2.19 million members, a junior developer would have to bird-dog this discrepancy and report a detailed explanation to the VP.  Did I mention that these issues had to be resolved in time to send the emails out on the first day of the month?  If the junior dev started the process at the end of the month and the numbers didn’t jibe, guess what — she would be there late into the night of the last day of the month, running queries.

After seeing this junior developer data-diddling her career away (she dreamed of writing javascript and mobile apps, not running SQL scripts that took hours to complete), I decided to do something about this.  Over the course of a few weeks, I extracted all the arcane business rules embedded in thousands of lines of legacy SQL and codified them into a robust SSIS solution that Extracted the production data, Transformed it by applying the business rules used to identify valid data, and Loaded the data into CSV files — the classic ETL process.

My first challenge was to get away from running real-time queries against a production database to get the initial set of former members.  That process alone would take a developer hours to collect, validate, and eventually turn into CSV files that would be uploaded to our email provider.  I essentially designed a poor man’s data warehouse, staging the production data into my sandbox using SSIS, and then having my SSIS packages call stored procedures to transform it.  Oh, I know what you are saying — don’t you have a data warehouse or reporting team to do this?  Well, yes, but for reasons I can’t detail here, that team reported to a different manager and there were office politics involved.

Along the way, I learned a lot more about SSIS than I cared to (being a full-stack developer makes me hesitant to operate on one tier for too long), but it was a worthy effort.  What used to take days now takes hours — and that is just the time for the ETL process to run on the back-end.  It is not like some dev is sitting there, waiting for the results of a query to pop up in SSMS so that she can copy and paste them into Excel (yes, you heard correctly, she was taught to literally copy and paste the results into an Excel file and save that as a CSV).  I never understood why this was the process, and when I asked why, I was given the typical cargo cult response: “We always did it this way.”

For those that are interested, I summarized this project in a deck posted on Slideshare:


Written by Kevin Raffay

July 31, 2016 at 6:42 pm

Posted in Uncategorized

Basic CRUD Development With ORM/Telerik/WCF – Part 2 – ORM


For our project, we can use the Lite version of LLBLGen Pro, which is limited to eight entities per project.

Here is a screenshot of the LLBLGen Pro project I used:


When working with the LLBLGen Pro ORM, I found a need to generate not only the regular entities, but also objects called “Typed Views” to serve as DTOs (Data Transfer Objects).  Most ORM entities have extra metadata to manage state, such as “IsDeleted” properties.  This makes working with them powerful, but adds overhead to the payload in a Service-Oriented Architecture (SOA).  I wanted to be able to use DTOs, or POCOs (Plain-Old-C#-Objects) as they are also known, between the service tier and the presentation layer.  This ORM allows me to generate these DTOs, saving me a lot of time.

The class diagrams below show the difference between an entity and a typed view:


Eventually, I want to serialize the typed views into lightweight JSON objects.  Serializing entities creates a complex JSON structure that does not work well for a RESTful application, as we shall see.

Written by Kevin Raffay

August 24, 2015 at 7:00 pm

Posted in CRUD, LLBLGen Pro

Basic CRUD Development With ORM/Telerik/WCF – Part 1 – Database


The LPA (Landing Page Admin) database is pretty basic, with eight tables, but I will only use four for now.

Here is the database diagram:


Templates – Each Landing Page has a template, which contains metadata about the page.

GlobalTokens – Tokens stored as named-value pairs that apply to all templates.

TemplateTokens – Each template will inherit a set of global tokens.

Patterns – Patterns are the layouts that define the template (2-column, 3-column, etc.)

You can download the SQL script below to create these objects, but it may be easier to create a database called “LPA” first and then execute only the part of the script that creates the objects.  The script also includes some other objects used for logging.

 SQL Script on Dropbox

Create Database Script as Word Doc

Written by Kevin Raffay

August 16, 2015 at 2:14 am

Posted in CRUD, Microsoft


Basic CRUD Development With ORM/Telerik/WCF – Intro


I recently finished up an engagement at a client where I had to build a fairly basic but robust application to manage landing pages.  Landing pages are exactly what they sound like – pages that a web user “lands” on when searching for a site.

In this project, the company had over 50 landing pages, and all of them were distinct sets of static HTML with their own set of CSS/JS and images.  They were hosted on the web, and it was a nightmare to make changes to all the landing pages because a developer would have to pull down the content, edit it, and then upload it back to the website.

One of the problems the company was having was that the pages weren’t entirely static.  Many had dynamic content that had to be “injected” when the pages were rendered.  Some of this data was content like interest rates, which change frequently.  In addition, this company had many different lines of business, and each had their own phone numbers, some of which changed frequently.

I will demonstrate in the next series of posts how I built a distributed ASP.NET MVC application to manage these landing pages.  The goal was to be able to tokenize the content that changed frequently, and use the power of MVC to render the landing pages with a mix of static and dynamic content.

None of this was rocket science, but there were some constraints.  Of course, it had to be a distributed SOA application, for scalability reasons, and besides, tightly coupled monolithic applications are just bad.  Also, the company had a very onerous build and release process, and standing up intranet sites was essentially impossible.  There was just too much pushback from DevOps to let us set up a project that required much ops support, even if it was an internal app.  Also, it couldn’t require many environment-specific configuration settings.

It just had to work, without ops getting involved.

Normally, for a project like this, I would create a website, setup a WCF service tier, and expect to host both on IIS.  But this was not your typical company 🙂

I also wasn’t going to get a dedicated DBA to write my stored procedures, and in any event, I didn’t want to use any.  This was a place where they still wrote tons of procs for basic CRUD applications, and expected the devs to write tedious ADO.NET code.

Really?  Hey – the 90’s called, they want their application development process back 🙂

So I had to, and wanted to, use an ORM.

And it wasn’t going to be Entity Framework.  EF is a nice, free ORM, but you get what you paid for.  I prefer a world class ORM, like LLBLGen Pro.

I also didn’t want to deal with a ton of WCF ServiceModel configurations – ops would not help when we had to change endpoints as we moved from DEV to QA to PROD.  So I explored some WCF self-hosting options that I scrounged up on the internet.  BTW, ServiceStack would have been great for a project like this, but an elegant, basically config-less, endpoint-ignorant, and dependency-free service architecture would have been a bold and revolutionary concept at this company.

Finally, I wasn’t about to re-invent the UI wheel and handcraft my own client-side grids and write tedious javascript to input and render data.  I would use a third-party component suite that was responsive and robust.  That would be Telerik Kendo UI.

I know, I hear you purists out there…


Yeah, right.  This is an internal CRUD app, not FACEBOOK or even GOOGLE.  I don’t need to write yet another JSON parser, logging component, tabular grid, or my own responsive CSS framework, service bus, or inversion of control container.  I make a living writing robust business applications quickly and efficiently, not indulging my hacker ethos.

Anyway, I’ll go through the process in a few posts and upload the source to GitHub.

Here’s a screenshot (I know, I told you it was basic):





Written by Kevin Raffay

August 16, 2015 at 1:16 am

Posted in Microsoft
