Functional Languages in the Workplace

On a semi-regular basis, people question why I choose to use F# to implement projects. They ask why I use a lesser-known language when one like C# has a larger developer pool and is more widely documented. I explain my rationale, citing personal experience as well as documented success stories from others. There is significant evidence that functional languages can reduce commonly occurring defects through their inherent immutability, provide easier support for scalability, and have a stronger type system allowing for more expressive code. There are numerous testimonials on the benefits of functional languages, but after hearing all of this, people are still doubtful about even considering a change. Assuming this evidence is correct, the question of “Why isn’t this a serious choice for the majority of organizations?” continues to appear.
During discussions about switching to a functional language, I repeatedly hear the same common questions and arguments for resisting change. Most of these embody fear, uncertainty, and doubt. Several could apply to a move to any technology, and although they should be considered, none of them is insurmountable. Here are my responses to the most common arguments against change.

Our code is already written in language X, and it will be hard to make a change

There will always be legacy code, and it probably deviates from the standards used today. Was it written in a previous version of the currently used language? Does it contain libraries that are no longer supported? Was it written in such a way that converting it to current standards is difficult or impossible? If the answer to these questions is yes, that doesn’t mean other projects must suffer the same fate.
Legacy code shouldn’t hold you back from technological advancements, and it most likely doesn’t now. Over the last several years, many software vendors have made sweeping changes to languages and technologies, leaving them only vaguely resembling what they were when first created. The introduction of generics, the inclusion of lambda expressions, and asynchronous additions were huge advancements in several different languages and greatly changed common approaches to solving problems. These enormous changes didn’t stop organizations from modernizing many of their applications to take advantage of new features, even though code written with them is radically different from previously created applications.
Radical shifts in technology happen all the time, and almost every organization shifts its strategies based on trends in the industry. Organizations which defer changes often find migrating harder the longer they wait, because they continue to implement solutions using their current approach. Mindlessly jumping from one approach to another is never a wise decision; that introduces chaos. But neglecting to try new approaches due to legacy concerns can only end in repeating the same mistakes.

Our developers don’t know language Y. It will be too hard and costly for them to learn and migrate.

A developer’s job is to learn every day. There are new features to understand, new architecture patterns to master, and new languages to learn; the list is endless. The belief that at any stage in one’s career the road to deeper understanding ends is myopic, and ultimately an exit ramp to another profession or a stagnant career. Developers should be challenged. Organizations should push their staff to understand new things, and compared to the opportunity cost of repeating the same mistakes, the amount of time and money required to train people is often negligible, especially with tools like books, video learning, computer-based training, etc.
There are some people who have no desire to continue learning, and that’s OK. New development isn’t for everyone, and going back to the previous point, there are always applications in need of support that won’t or can’t be converted. Organizational migration to a technology is almost never all or nothing, and some solutions should be left exactly as they are, because the cost of converting them will outweigh the benefits. There will be people to maintain those in the long term, but those solutions cannot become the anchor holding back how other projects progress.

What if we fail and we are stuck with a language we can’t use?

If an organization takes the leap of faith and switches to a functional language, what is the probability of some failure during the process? The initial answer is 100%. Everyone fails every day at something; failure is inevitable. With this in mind, you’re already failing at something, so the question is what you are going to do to try to fix it. You’re going to create other problems too, but with planning, retrospective analysis, and learning from those mistakes, they will be solved as well, and ultimately you will end up further along than where you started.
A few years ago, I had a discussion with an organization about their development practices. They were extremely adept at knowing where their time was allocated: support, feature enhancements, refactoring, etc. When asked about their breakdown, they explained that, on average, 30% of their time went to fixing production defects from previous releases. They were perplexed about why they were missing deadlines despite becoming more stringent about code quality. I asked about their plan to fix it, and they responded with a few ideas, but their final answer distilled to, “write better code.” When confronted with the question, “What are you going to change?” they said, “Nothing. Changing the development process is too time consuming and costly. If we update our practices, we’ll fall further behind on our releases.” The definition of insanity is doing the same thing and expecting a different result, yet several organizations believe they can break the cycle simply by standing still. If changing how an organization develops isn’t feasible, then changing what it develops with is one of the few viable options remaining. It is much easier to change a technology than it is to change an ingrained culture, which is exactly why using languages and tools that enforce error-reducing practices is a much more efficient approach than convincing everyone to work in a certain way.
Most organizations resistant to change perceive technology migrations as revolutionary. They firmly believe all use of a certain technology immediately stops and the new one begins, because it is much easier to think in black and white (one vs. the other) when change is a rare and uncomfortable occurrence. Adopting anything new should be cautious and take practice; it should be evolutionary. Organizations should try several smaller variations of an approach, learning from each and refining their ideas on gradually larger projects. Embracing adaptation and a “failure leads to a stronger recovery” mindset ultimately leads to a better outcome.
It is almost certain that moving to a functional language from an unrelated paradigm is going to be difficult and confusing, but the fault does not lie with the language itself. As with anything new, the concepts are unfamiliar to those starting to use it. There will be mistakes during the learning process, and some projects will probably take longer than expected, but judging the long-term benefits by the first attempt at anything will show a biased result against it. With time, moving to an approach which helps developers make fewer mistakes and write better, cleaner code will save both time and money.

It’s not widely used enough for us to find people to support it

My coworker recently attended two meetups concerning functional programming, each with approximately 25 attendees. After the first one, he decided to do an experiment at the second. He asked people at the meetup, “How many of you use a functional language at work?” and the result was astounding: only one person admitted to it, and only part time. At a minimum, there are 25 people at each location who are excited enough about functional programming to attend a meetup on their own time on a topic which has nothing to do with the tools they use at work, and these people are only a fraction of the larger workforce. There are many others who were either unable to attend or unaware of the event.
There is almost no place in the United States that isn’t a competitive market for development staff. Large companies can pay higher rates and offer better benefits, which means they pull the majority of the most qualified candidates. Smaller organizations can’t offer enormous benefits packages, placing them in a difficult position when filling needed roles. Picking a technology with fewer available people would seem to put those organizations at a disadvantage, but the reality depends on overall demand for that kind of developer. Looking solely at the number of potential applicants, the pool of functional programmers is smaller, but organizations using functional languages aren’t nearly as widespread either, so those organizations face less competition when searching for candidates. Furthermore, assuming the statistics surrounding the benefits of functional languages are correct, organizations will require fewer programmers, accommodating the constraint of a smaller pool of applicants.

Conclusions

Functional languages can be an excellent fit for organizations, both ones just starting development and others which have been established for a considerable length of time. Most resistance to using them comes from misunderstanding the benefits compared to the cost of changing languages. Attempting to improve the development process by focusing on better tools is neither as difficult nor as time consuming as it may first appear.

C# 7 Additions – Tuples

In C# 7, Microsoft has introduced an updated Tuple type. It has a streamlined syntax compared to its predecessor, making it look more like F#. Instead of declaring it like previous versions, the new Tuple type looks like:
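
A minimal sketch (the identifiers are illustrative):

    // declaring a local of the new tuple type
    (string name, int age) person = ("Pat", 30);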

Likewise, to declare it as a return type, the syntax is similar to declaring the value itself:
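
For example (names illustrative; this GetPerson helper is reused in the sketches below):

    static (string name, int age) GetPerson()
    {
        return ("Pat", 30);
    }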

The first thing to note about the new type is that it is not included automatically in a new project. If you try to use it immediately, you’ll see the following error.

As of VS 15 Preview 4 (not to be confused with VS 2015), you must include the System.ValueTuple NuGet package to take advantage of it.
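
From the Package Manager Console, that is:

    Install-Package System.ValueTuple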

This raises the question of how the new Tuple type relates to the previous one included since .NET 4. It doesn’t. They are treated as two different types and are not compatible with each other. System.Tuple is a reference type and System.ValueTuple is a value type.
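
A quick sketch of the incompatibility:

    var value = (1, "one");            // System.ValueTuple<int, string>
    // Tuple<int, string> t = value;   // does not compile: no conversion exists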

So what are the advantages over the previous version? The syntax is simpler, and there are several other benefits as well.

Named Properties

In the System.Tuple version, properties of the return object were referenced as Item1, Item2, etc. This gets confusing when the Tuple contains multiple values of the same type, as you have to know which position holds which value.

Now it’s possible to explicitly name the items to reduce confusion.
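
A sketch with illustrative names:

    // callers no longer have to guess at Item1/Item2
    static (int min, int max) FindRange(IEnumerable<int> values)
    {
        return (values.Min(), values.Max());
    }

    var range = FindRange(new[] { 3, 1, 4 });
    Console.WriteLine(range.max);   // instead of range.Item2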

The Item properties (Item1, Item2, etc.) are also still included, allowing methods to be updated to the new type without breaking code that relied on the predecessor’s naming.

It’s also possible to explicitly name the values when creating the object:
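
For instance:

    var point = (x: 1.5, y: 2.5);
    Console.WriteLine(point.x);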

Deconstruction

It is now possible to name and assign variable values upon creating (or returning) a tuple. Although not necessary, it reduces the amount of code needed to pull values out of the type.
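
Using the GetPerson sketch from above:

    var (name, age) = GetPerson();   // no intermediate tuple variable needed
    Console.WriteLine(name);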

It’s not certain whether C# will get wildcards like F# has to automatically discard values which aren’t needed. If they are added, then it’s possible to create a variable for only the name, like so:
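
A sketch of what that syntax might look like, borrowing F#’s underscore (hypothetical at the time of writing):

    var (name, _) = GetPerson();     // age is discarded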

Updating Values

System.Tuple is immutable. Once created, it’s not possible to update any of its values. This restriction has been removed in the new version. From a purely functional perspective this could be considered a step backwards, but in C# many people find this approach more forgiving and beneficial.
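
For example:

    var pair = (first: 1, second: 2);
    pair.first = 10;                 // allowed: the new tuple's fields are mutable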

Like all value types, when a tuple is passed into a method, a copy is created, so modifying it in the method does not affect the original.
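
A small sketch of the copy semantics:

    static void Mutate((int first, int second) pair)
    {
        pair.first = 100;            // modifies the local copy only
    }

    var original = (first: 1, second: 2);
    Mutate(original);
    Console.WriteLine(original.first);   // still 1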

However, when comparing two tuples, the Equals method compares the values in each; if they are all equal, it considers the tuples equal.
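
For example:

    Console.WriteLine((1, 2).Equals((1, 2)));    // True: compared by value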

Integrations with Other Languages

Unfortunately, C#’s new tuple type doesn’t automatically translate to and from F#’s tuples.

F# can’t deconstruct the values like it can with its native tuples, and to return one, you have to explicitly instantiate the object type and add the values.
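
A hedged F# sketch of the explicit construction being described (assuming the System.ValueTuple package is referenced):

    // no native F# syntax applies here, so construct the type explicitly
    let makePair () : System.ValueTuple<int, string> =
        System.ValueTuple.Create(1, "one")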

Either way, the translation to F# isn’t horrible, as it acts like any other object passed to it from C#.

Minutes and Seconds

When dealing with dates in .NET, most applications use the System.DateTime struct to store and manipulate dates. It has a natural conversion from MSSQL Server’s DateTime datatype and has little difficulty translating into different data formats (JSON, XML, etc.).

For a while now, JSON has been a standard for communicating between applications, web clients and servers, to and from databases, and so on. Serializing a DateTime object converts it into the ISO 8601 format:
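
With Json.NET, for example (the date is illustrative):

    var date = new DateTime(1923, 3, 20);
    string json = JsonConvert.SerializeObject(date);
    // json: "1923-03-20T00:00:00"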

and converting it back works exactly as expected:
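
Deserializing the same string brings the value back unchanged:

    var roundTripped = JsonConvert.DeserializeObject<DateTime>("\"1923-03-20T00:00:00\"");
    // 3/20/1923 12:00:00 AM, Kind = Unspecified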

Serializing the date and parsing it with JavaScript on some browsers (such as Chrome for Mac and Windows, and Safari) yields different results. The browser moves the date 4 hours into the past, because JavaScript assumes the time it parsed was in UTC.

Secondly, JavaScript based its interpretation on an incorrect time zone assumption. In 1923, the majority of the United States stayed on standard time the entire year, and those that moved their clocks forward did so on April 29. Daylight Saving Time was not in effect, and throughout history, its use and when it was applied have varied. Even if the passed-in date was meant to be in UTC, the conversion should have made it Mon March 19, 1923 19:00:00 GMT -0500 (EST).

This also means that when serializing the date to JSON and sending it back to the server, the browser sends it as its assumed UTC date.

At first glance, this looks correct, but changing the date to 1923-03-20T00:00:00.000Z (adding the Z at the end, indicating the date is in UTC) yields 3/19/1923 7:00:00 PM, showing that the .NET DateTime object is trying to apply the time zone offset. Now the question is, “Why is .NET deserializing the date as 3/19/1923 7:00:00 PM and not 3/19/1923 8:00:00 PM like JavaScript does, especially since the serialized date came from JavaScript after it interpreted the original date as UTC?” The answer is that JavaScript only interpreted the date as UTC and displayed it in the time zone the machine is set to. The date it held didn’t change, so when it serialized it to the ISO 8601 format, JavaScript only appended the Z, thinking the date was already in UTC.

Sending the time zone offset when serializing from .NET fixes the inconsistencies in some instances, but not all of them.

Since the time zone wasn’t specified, .NET assumes it is the local time of the server and produces 2016-03-11T19:00:00-05:00. Loading that string into a JavaScript Date object produces the same result, although displayed in UTC:

One of the issues with creating DateTimes in .NET without explicitly setting the time zone is that .NET can make incorrect assumptions. Setting the time zone to Indiana (East) and parsing the date 1989-05-12T00:00:00.000Z yields 1989-05-11T20:00:00-04:00, showing the UTC conversion to Eastern Daylight Time. Indiana didn’t follow Daylight Saving Time until 2006, meaning the conversion is incorrect. (Even if it did do the conversion correctly, setting the Set Timezone Automatically flag in Windows 10 forces it to be Eastern Time.)

Explicitly setting the time zone using DateTimeOffset corrects the problem in .NET, but it doesn’t guarantee correctness across systems.
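
For example, with Json.NET:

    // DateTimeOffset carries the offset explicitly
    var date = new DateTimeOffset(1923, 3, 20, 0, 0, 0, TimeSpan.FromHours(-5));
    string json = JsonConvert.SerializeObject(date);
    // json: "1923-03-20T00:00:00-05:00"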

When the browser parses the date with the correct offset, it attempts to display the date with the offset it thinks it should have.

The data in the Date object is still correct, and exporting it will not alter it in any way. Its attempt to display the date in local time is the issue. In the United States, the only affected dates are prior to 2007, because that is when the last change to the time zones occurred. (The start of Daylight Saving Time was moved ahead to the second Sunday in March, and the end extended to the first Sunday in November. If you load a date from the year 2004 into JavaScript in Chrome, the date will display incorrectly for all dates between March 14 and April 4.)

The real solution to handling dates is to use libraries specifically designed to handle time zones and Daylight Saving Time (Moment.js for JavaScript and Noda Time for .NET). These significantly reduce the inconsistencies found in the built-in date and time conversions in .NET and in browsers, though even these libraries can contain mistakes.

Pushing Data

Consider the following two pieces of code:
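
A minimal sketch of the first method, which owns its own connection (the query and connection string are illustrative):

    // using System.Collections.Generic;
    // using System.Data.SqlClient;
    public static IEnumerable<string> GetNames(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Name FROM dbo.People", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // yield moves these using blocks into the compiler-generated
                    // enumerator, so disposal waits until enumeration finishes
                    yield return reader.GetString(0);
                }
            }
        }
    }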

and
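
And the second, where the connection is created outside the iterator and injected:

    public static IEnumerable<string> InjectSqlConnection(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // GetNamesWithConnection's body does not run here; the using block
            // exits (and disposes the connection) before anything enumerates
            return GetNamesWithConnection(connection);
        }
    }

    public static IEnumerable<string> GetNamesWithConnection(SqlConnection connection)
    {
        using (var command = new SqlCommand("SELECT Name FROM dbo.People", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    yield return reader.GetString(0);
                }
            }
        }
    }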

Although they look roughly the same, they produce two very different results. The first returns an enumeration of strings, and the second throws an InvalidOperationException.

The difference stems from how C# handles the yield statement. Both methods promise to return an object which implements IEnumerable<string>. In .NET 1.0 and 1.1, there was no yield statement, and to return a custom object (not an array) which implemented it, you had to create a class satisfying the IEnumerable requirements. (Remember, it’s not generic, because it was created before generics were added.)

When yield was added in .NET 2.0, in order to satisfy the requirement of returning such an object, the C# compiler converts a method which uses yield into a generated class implementing the IEnumerable<T> interface and rewrites the method to return an instance of that class.

(Entire generated class).

The important thing to note is that the generated class implements IDisposable, and it is responsible for disposing of the SqlConnection and SqlCommand, not the original method. This means that as the IEnumerable<string> object passes up through the calling methods, nothing disposes the connection until the enumeration object itself is disposed. The second method does not do this, because the SqlConnection is injected and the using statement is outside of GetNamesWithConnection’s control (and hence it is not included in the generated class that GetNamesWithConnection converts to). Once the enumerator returns from the method responsible for disposing the SqlConnection (in the above example, InjectSqlConnection), the using statement’s scope exits and the SqlConnection’s Dispose method fires.

How to fix the InjectSqlConnection method?

The easiest solution is to ensure that all operations performed on the returned IEnumerable object occur before the using statement completes and SqlConnection.Dispose fires. In many scenarios this isn’t an issue, but in the following example it isn’t a possibility.
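
A sketch of the shape being described (the controller name and query are illustrative):

    public class NamesController : ApiController
    {
        public IEnumerable<string> Get()
        {
            using (var connection = new SqlConnection(connectionString))
            {
                return GetNamesWithConnection(connection);
            } // Dispose fires here, before the framework serializes the result
        }
    }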

This method is from a class which inherits from ApiController, where the method’s job isn’t to act upon the data but to return it so the invoking method can serialize it and send it out the response stream.

In this scenario, the first option would be to take the SqlConnection out of the using statement and let the Garbage Collector (eventually) dispose of it. This is a bad idea and really isn’t a viable solution; not disposing of SqlConnections will eventually use up all the available ones in the connection pool.

The second option would be to manually enumerate through the returned enumeration.
change this:
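
A sketch of the original call, which defers all work:

    return GetNamesWithConnection(connection);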

to this:
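
Re-yielding each item turns the outer method into an iterator too, which keeps its using block alive until the caller finishes enumerating:

    foreach (var name in GetNamesWithConnection(connection))
    {
        yield return name;
    }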

This is a possible solution, but it is a lot more work on the part of writing the code (since you would have to do this to each method returning data) and more work on the part of the system (keeping track of multiple enumerators, both the original and the new one, etc.). It’s a viable option, but it’s not ideal.

A third option would be to convert the returned enumeration to an object which doesn’t require a SqlConnection to operate such as an Array or a List.
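
A sketch of this approach:

    // ToList() drains the reader immediately, while the connection is still open
    return GetNamesWithConnection(connection).ToList();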

This works because the ToList() method creates a List object and loops through the contents of the enumeration, adding each item to the list. Lists (and Arrays) exist purely in memory, which is why the SqlConnection is no longer needed, unlike with the yield return statement, which only loads the record it needs at that moment (this is referred to as Lazy Loading).

Request.RegisterForDispose

The ApiController class has a property named Request which returns an object of type HttpRequestMessage. This object has a RegisterForDispose method which returns void and takes an IDisposable object. Any object passed into this method will be disposed once the current request finishes, which is after the contents have been added to the response stream.

Encasing the SqlConnection in a using statement is no longer necessary to ensure it gets disposed. This allows the SqlConnection’s creation and registration for disposal to be abstracted into its own method. As long as the system retrieves all connections from this method, it is impossible to leave a connection undisposed.
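
A sketch of such a helper (names are illustrative):

    private SqlConnection CreateOpenConnection()
    {
        var connection = new SqlConnection(connectionString);
        // ties the connection's lifetime to the current request
        Request.RegisterForDispose(connection);
        connection.Open();
        return connection;
    }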

Wait Wait Don’t Tell Me…..On Second Thought

Consider the following:
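
Something along these lines (the work inside the loop is illustrative):

    DECLARE @Counter INT = 100;
    WHILE @Counter > 0
    BEGIN
        WAITFOR DELAY '00:00:01';  -- stand-in for real work
        -- the decrement (SET @Counter = @Counter - 1) is missing,
        -- so this loop never ends
    END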

Ideally, the example would start at 100 and loop until the counter hits zero, but the statement to decrement the counter is missing. It won’t ever stop running, and there is no visual cue to alert anyone that there is an issue. The most logical solution would be to add a PRINT statement.

Sadly, this does not solve the issue at hand, as the PRINT statement only displays its output after the script finishes executing. It’s great for knowing what occurred once everything is finished, but it provides no help in telling what is currently happening. Conveniently, there is a solution using the RAISERROR function.

The first parameter is the message to display, and the second and third are the severity level and the state. RAISERROR treats anything with a severity level from 0-10 as a message, and it does not affect code execution. The state ranges from -1 to 255, and anything outside of that results in an exception (for the purposes of printing messages, this value is inconsequential). Using RAISERROR with the WITH NOWAIT option outputs the message immediately, describing how the script is executing as it happens.
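
Inside the loop above, that looks something like:

    RAISERROR('Counter is currently %d', 0, 1, @Counter) WITH NOWAIT;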

informational message

When the severity level is raised above 10, SQL Server raises an exception:

Error

This works really well when executing a command from within a .NET application, but an issue arises when executing code from an external process like SqlCmd. When using the RAISERROR function and supplying a severity above 10, it doesn’t always cause the process’s exit code (Process.ExitCode) to change to something other than 0, which is what alerts the calling application to an issue.

Exit code 0

Why use SqlCmd when .NET has a built-in way of executing code with SqlCommand? The SqlCommand object does not allow the use of the batch separator GO. For applications doing Create/Read/Update/Delete (CRUD) operations, not using the GO statement isn’t a problem, but it is important to applications which perform maintenance on databases. Certain SQL statements, like CREATE PROCEDURE, must be the first item in a batch, meaning chaining multiple statements together in one execution is impossible without using GO.
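
For instance, two procedures can only be created in one script by separating the batches:

    CREATE PROCEDURE dbo.ProcOne AS SELECT 1;
    GO
    CREATE PROCEDURE dbo.ProcTwo AS SELECT 2;
    GO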

Fortunately, there is an easy solution to this problem. Change the severity level to 20-25 and add the WITH LOG option (values greater than 25 are interpreted as 25). This will close the connection to SQL Server and force applications like SqlCmd to return an exit code other than 0. Unfortunately, to set a severity level between 19 and 25, the account running the script must either be a member of the sysadmin role or have the ALTER TRACE permission.
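
For example (requires the permissions noted above):

    RAISERROR('Deployment script failed; aborting.', 20, 1) WITH LOG;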

Exit Code Not Zero

It is possible to look at the output and error messages and attempt to parse them to determine if an error occurred, but this is error prone and not the standard method for checking for system exceptions. It may work for applications specifically written for this scenario, but applications written to interact with others in the Windows ecosystem look at the exit code to determine process success or failure, making it vital to set it to the correct value when interacting with different systems.

Under The Mattress (or Compiled Code) is Not a Good Place to Hide Passwords

The question comes up from time to time about storing passwords in code and whether it is secure. Ultimately, it’s probably a bad idea to store passwords in code strictly from a change management perspective, because you will most likely need to change them at some point in the future. Furthermore, passwords stored in compiled code are easy to retrieve if someone ever gets hold of the assembly.

Using the following code as an example:
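
A hypothetical offender looks like:

    // the literal is compiled into the IL, where tools like ILSpy read it directly
    private const string Password = "SuperSecret123!";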

IL Spy Showing Password

So what about storing the password in some secure location and loading it into memory? This requires the attacker to take more steps to achieve the same goal, but it is still not impossible. To acquire the contents in memory (assuming the attacker can’t just attach a debugger to the running assembly), something will have to force the program to dump the memory contents to a file for analysis.

Mark Russinovich wrote a utility called ProcDump which easily does the trick. Look for the name of the process (my process is named LookForPasswords) in Task Manager and run the following command:
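
Assuming ProcDump’s usual switches, a full memory dump is:

    procdump -ma LookForPasswords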

This creates a file akin to LookForPasswords.exe_140802_095325.dmp, and it contains all the memory information of the running process. To access the file contents, you can use either Visual Studio or something like WinDbg.

WinDbg:
Open Dump File

After you open the dump file, you’ll need to load SOS.dll to access information about the .NET runtime environment in WinDbg.
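
For a .NET 4 process, that is typically:

    .loadby sos clr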

LoadBySos

Once this is loaded, you can search the dump file for specific object types. To get the statistics on strings (System.String):
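
Using SOS’s heap commands:

    !dumpheap -type System.String -stat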

String Statistics

This command displays a lot of information: the method table, how many instances exist, how much memory they occupy, etc. What you need to know is where the string data itself lives in memory. To list the instances of a specific object type:
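
With the MethodTable address taken from the statistics output, something like:

    !dumpheap -mt <MethodTable address>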

For example:

Show String List

Show Memory Address

Show Offset

In a string object, the actual data we want is located at the memory address plus the offset value (which is c). You can see this by accessing the specifics of the String object by inputting the following:
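
Using SOS’s object dump, and WinDbg’s du to read the Unicode characters at the offset:

    !do <object address>
    du <object address>+c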

or in the example

Doing this for each string in the program would be rather tedious and time consuming, considering most applications are significantly larger than the example application. WinDbg solves this issue with the .foreach command, which loops through all the string objects and prints out their contents.
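
A sketch of such a loop, assuming SOS’s -short flag to emit bare object addresses:

    .foreach (str {!dumpheap -type System.String -short}) {du ${str}+c}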

Show all strings

To mitigate the attack of dumping a program’s memory, Microsoft added the System.Security.SecureString datatype in .NET 2.0. Although effective, it has some drawbacks, mainly that to use it effectively you have to use pinned objects, and doing this requires checking the unsafe flag in projects.

Unsafe Compile

Most organizations won’t allow unsafe code execution, which makes using SecureString pretty much pointless for them. With this in mind, the safest route for securing information is not to have it in memory at all; this removes the problem entirely. If it must reside in memory, then you can at least encrypt it while it’s stored there. This won’t solve every problem (if unsecured contents ever existed in memory, they might still be found), but it will at least reduce the possibility of the data being stolen.

The contents for the above example can be located on GitHub

Can I at Least Get Your Name Before We do This?

As applications grow in size and age, they can become difficult to navigate and unruly to manage. Most organizations give little thought to spending extra time maintaining existing code to a certain level of quality, and even less to spending money to rewrite or even refactor code to keep it maintainable. Fortunately, in .NET 4.5 Microsoft added a significant piece of functionality to at least help gather data on application flow. In the System.Runtime.CompilerServices namespace, three new attributes were added to help log calling-function information.

  • CallerFilePathAttribute
  • CallerMemberNameAttribute
  • CallerLineNumberAttribute

To use them, add the attributes to the appropriate parameters in a method:
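
A minimal sketch (names are illustrative):

    using System;
    using System.Runtime.CompilerServices;

    public static class Logger
    {
        // the caller-info attributes require optional parameters;
        // the compiler fills in the real values at each call site
        public static void Log(string message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string filePath = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            Console.WriteLine("{0} (from {1} at {2}:{3})",
                message, memberName, filePath, lineNumber);
        }
    }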

Console Output

The great part about this is that it populates the information without modification to the calling methods’ source code. You can add parameters with these attributes to existing methods, and they will automatically be populated. There are a couple of quirks, though.

Optional Parameters

In order for the application to compile, you must mark these as optional parameters. Looking at how the information is resolved at compile time, this makes sense. If you notice in ILSpy, the attribute is attached to the called method’s parameter, but the compiler generates the parameter values in the calling method. This means the compiler processing the calling method must be aware the attribute has been added to the parameter (so this feature requires recompiling the assemblies which call these methods), and it must understand that it needs to add the caller’s information when invoking. An optional parameter is required to signify to the compiler that the source file is not missing parameter information (which would result in an error) and that the calling method expects the compiler to fill in the correct default values.

In addition, it’s a smart idea anyway, because not all .NET languages understand these attributes. For example, calling the C# method from F# doesn’t give the expected values; it inserts the defaults. F# doesn’t understand that when invoking this C# method, it should replace the optional parameter values with the caller’s information.

F# Console Output

Namespaces

When using CallerMemberNameAttribute, namespaces aren’t provided. This means that if you have methods with the same name in different classes or namespaces, there is no differentiation between them in the information passed to the called method. You can get around this by using the other two attributes, which give you the file of the calling method and even the line where the called function was invoked, but without them there is a level of uncertainty.

The parameters can be overridden

If you explicitly set the parameter’s value, the calling method’s information won’t be sent.
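
For example, continuing with the Logger sketch above:

    Logger.Log("saving", memberName: "NotTheRealCaller");
    // prints "NotTheRealCaller"; the compiler does not overwrite explicit arguments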

Depending on the scenario, this can be both an advantage and a disadvantage. If you need to override it and pass in different information, you can, but this also means you can’t add the attribute to an existing parameter and have it override what the calling method is already passing.

Reflection

Reflection doesn’t work at all with this feature. Caller information is a compile-time feature, and reflection is a runtime one. If you invoke a method with these attributes through reflection, the compiler won’t know to add them, so you’re on your own to supply the information. (Reflection with optional parameters will work by sending Type.Missing, but it won’t get you the caller values.)

Speed

There is no slowdown when using this, because it’s all resolved at compile time. Using ILSpy reveals there is nothing like reflection or other runtime resolution involved. (This makes a lot of sense, because reflection doesn’t have information like the code file path or the line numbers of the calling function, so it has to be resolved at compile time.)

The new feature is great, but it has its limitations. Considering that it’s possible to add the attributes to existing methods, and as long as the invoking methods are recompiled with a compiler that is aware of how to insert the information, a wealth of information becomes available with very little work. All around, it’s a significant win for helping maintain applications.

You can find all the project contents here.

Language Comparison: Euler Problem One

Question:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.

Find the sum of all the multiples of 3 or 5 below 1000.

F#

F# Example One

or

F# Example Two

C#

C# Example One

or

C# Example Two
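
For comparison, a compact LINQ-based variant (not necessarily the version pictured above):

    // sum the multiples of 3 or 5 below 1000
    var sum = Enumerable.Range(1, 999)
                        .Where(n => n % 3 == 0 || n % 5 == 0)
                        .Sum();
    Console.WriteLine(sum);   // 233168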

JavaScript

JavaScript Example One

SQL

or

PowerShell

or

The Book Of F#

The Book of F#

The book comes out this month, by Dave Fancher, F# MVP. It’s a great book! It has an easy-to-read narrative and a fun style, and it is designed for .NET object-oriented programmers who want an introduction to both functional programming and F#. It’s a great reference book and learning tool. Even if you don’t want to program in F#, the functional approach to programming will teach you a lot, with great examples and easy-to-read explanations. I can honestly say the book never gets old. (I was the technical reviewer and got to read each chapter about 4 times. Think about how many technical books you would read 4 times without contemplating putting a fork in your eye. Honestly, with his I didn’t, and the inside sci-fi references are fun too!)

On top of this, the introduction is awesome, and it explains exactly why you want to learn functional programming and F#.

The book can be found on the No Starch Press site.

Generating SQL

Generally, I don’t like writing code more than once if I can avoid it. Most people apply the DRY principle to application code, but I find it a good idea to follow it in SQL administration code as well. Most applications today store a significant amount of administration-level data in the database so the support team can quickly change the behavior of a system without having to redeploy the application. Although beneficial, this requires someone to write SQL code to insert and update values in the database, and depending on the amount necessary, that can be time consuming, error prone, and tedious.

To handle tasks which require writing several (in my case, hundreds of) SQL INSERT statements at once, I created the following script.

Use it by creating an accompanying file like so:

The crux of the script is the GenerateInsertStatement function. It takes the table name as a string and then a sequence of Entry values.
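
The Entry type and exact call shape live in the script itself; a rough, hypothetical F# sketch of the usage being described:

    // hypothetical: the real Entry definition may differ
    GenerateInsertStatement "dbo.ApplicationSetting"
        [ { Column = "Name";  Value = "'Timeout'" }
          { Column = "Value"; Value = "'30'" } ]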

To handle multiple inserts, all you need to do is create a sequence of sequences:

Sometimes you need to insert data into a table with an identity column. This requires wrapping the statement in the IDENTITY_INSERT option. The generation script has a function which will wrap the SQL code in an identity insert for the table. Use the AddIdentityInsertWrapper like so:
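
The wrapper amounts to standard IDENTITY_INSERT T-SQL, roughly (table and columns are illustrative):

    SET IDENTITY_INSERT dbo.ApplicationSetting ON;
    INSERT INTO dbo.ApplicationSetting (Id, Name, Value)
        VALUES (1, 'Timeout', '30');
    SET IDENTITY_INSERT dbo.ApplicationSetting OFF;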