Functional Languages in the Workplace

On a semi-regular basis, people question why I choose to use F# to implement projects. They ask why I use a lesser-known language when one like C# has a larger developer pool and is more widely documented. I explain my rationale, citing personal experience as well as documented cases of others' success. There is significant evidence showing that functional languages can reduce commonly occurring defects through their inherent immutability, provide easier support for scalability, and have a stronger type system allowing for more expressive code. There are numerous testimonials on the use of functional languages and their benefits, but after hearing all of this, people are still doubtful about even considering a change. Assuming this evidence is correct, the question of “Why isn’t this a serious choice for the majority of organizations?” continues to appear.
During discussions about switching to a functional language, I repeatedly hear the same common questions and arguments for resisting change. Most of them embody fear, uncertainty, and doubt. Several could be applied to moving to any technology, and although they should be considered, none of them is insurmountable. Here are my responses to the most common arguments against change that I receive.

Our code is already written in language X, and it will be hard to make a change

There will always be legacy code, and it probably deviates from the standards used today. Was it written in a previous version of the currently used language? Does it contain libraries that are no longer supported? Was it written in such a way that converting it to current standards is difficult or impossible? Even if the answer to any of these questions is yes, that doesn’t mean other projects must suffer the same fate.
Legacy code doesn’t have to hold you back from technological advancements, and it most likely isn’t doing so now. Over the last several years many software vendors have made sweeping changes to languages and technologies, leaving them only vaguely resembling what they looked like when first created. The introduction of generics, the inclusion of lambda expressions, and asynchronous additions were huge advancements in several different languages and greatly changed common approaches to solving problems. These enormous changes didn’t stop organizations from modernizing many of their applications to take advantage of new features, even though code written with them is radically different from previously created applications.
Radical shifts in technology happen all the time, and almost every organization shifts its strategies based on trends in the industry. Organizations that defer changes to their current approach often find migration harder the longer they wait, because they continue to implement solutions using that approach in the meantime. Mindlessly shifting from one approach to another is never a wise decision; it introduces chaos. But neglecting to try new approaches because of legacy concerns can only end in repeating the same mistakes.

Our developers don’t know language Y. It will be too hard and costly for them to learn and migrate.

A developer’s job is to learn every day. There are new features to understand, new architecture patterns to master, and new languages to learn. The list is endless. The belief that at any stage in one’s career the road to deeper understanding ends is myopic, and ultimately an exit ramp to another profession or a stagnant career. Developers should be challenged. Organizations should push their staff to understand new things, and compared to the opportunity cost of repeating the same mistakes, the amount of time and money required to train people is often negligible, especially with tools like books, video learning, and computer-based training.
There are some people who have no desire to continue learning, and that’s OK. New development isn’t for everyone, and going back to the previous point, there are always applications in need of support that won’t or can’t be converted. Organizational migration to a technology is almost never an all-or-nothing affair, and some solutions should be left exactly as they are, because the cost of converting them would outweigh the benefits. There will be people to maintain those in the long term, but those solutions cannot be allowed to hold back how other projects progress.

What if we fail and we are stuck with a language we can’t use?

If an organization takes the leap of faith and switches to a functional language, what is the probability of some failure during the process? The initial answer is 100%. Everyone fails every day at something. Failure is inevitable. With this in mind, you’re already failing at something, so the question is what you are going to do to try to fix it. You’re going to create other problems too, but with planning, retrospective analysis, and learning from those mistakes, those will be solved as well, and ultimately the position you end up in will be further along than where you started.
A few years ago, I had a discussion with an organization about their development practices. They were extremely adept at knowing where their time was allocated: support, feature enhancements, refactoring, etc. When asked about their breakdown, they explained that on average 30% of their time went to fixing production defects from previous releases. They were perplexed about why they were missing deadlines despite becoming more stringent about code quality. I asked about their plan to fix it, and they responded with a few ideas, but their final answer distilled to, “write better code.” When confronted with the question, “What are you going to change?” they said, “Nothing. Changing the development process is too time consuming and costly. If we update our practices, we’ll fall further behind on our releases.” The definition of insanity is doing the same thing and expecting a different result, yet several organizations believe they can break the cycle simply by standing still. If changing how an organization develops isn’t feasible, then changing what they develop with is one of the few viable options remaining. It is much easier to change a technology than it is to change an ingrained culture, which is exactly why using languages and tools that enforce error-reducing practices is a much more efficient approach than convincing everyone to work in a certain way.
Most organizations resistant to change perceive technology migrations as revolutionary. They firmly believe all use of a certain technology immediately stops and the new one begins, because it is much easier to think in terms of black and white (one vs. the other) when change is a rare and uncomfortable occurrence. Changing to anything new should be approached cautiously and takes practice. It should be evolutionary. Organizations should try several smaller variations of an approach, learning from each and refining their ideas on gradually larger projects. Embracing adaptation and a “failure leads to a stronger recovery” mindset ultimately leads to a better outcome.
It is almost certain that moving to a functional language from an unrelated paradigm is going to be difficult and confusing, but the fault does not lie with the language itself. As with anything new, the concepts are unfamiliar to those starting to use it. There will be mistakes during the learning process, and some projects will probably take longer than expected, but judging the long-term benefits by the first attempt to implement anything will show a biased result against it. With time, moving to an approach that helps developers make fewer mistakes and write better, cleaner code will save both time and money.

It’s not widely used enough for us to find people to support it

My coworker recently attended two meetups concerning functional programming, each having approximately 25 attendees. After the first one, he decided to do an experiment at the second. He asked people at the meetup, “How many of you use a functional language at work?” and the result was astounding. Only one person admitted to it, and only part time. At a minimum, there are 25 people at each location who are excited enough about functional programming to attend a meetup on their own time on a topic that has nothing to do with the tools they use at work, and these people are only a fraction of the larger workforce. There are many others who were either unable to attend or were unaware of the event.
There is almost no place in the United States that isn’t a competitive market for development staff. Large companies are able to pay higher rates and offer better benefits, which means they pull the majority of the most qualified candidates. Smaller organizations can’t offer enormous benefits packages, placing them in a difficult position when filling needed roles. Picking a technology with fewer people to fill those roles would seem to place such organizations at a further disadvantage, but that ignores the overall demand for those kinds of people. Looking solely at the number of potential applicants, the pool of functional programmers is smaller, but organizations using functional languages aren’t nearly as widespread either, so they face less competition when searching for candidates. Furthermore, assuming the statistics surrounding the benefits of functional languages are correct, organizations will require fewer programmers, accommodating the constraint of a smaller pool of applicants.

Conclusions

Functional languages can be an excellent fit for organizations, both those just starting development and those that have been established for a considerable length of time. Most resistance to using them comes from misunderstanding the benefits compared to the cost of changing languages. It is neither difficult nor overly time consuming to improve the development process by focusing on tools that better aid that process.

C# 7 Additions – Tuples

In C# 7, Microsoft has introduced an updated tuple type. It has a streamlined syntax compared to its predecessor, making it look more like F#’s. Instead of declaring it like previous versions, the new tuple type looks like:
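A minimal sketch of the difference (the variable names here are purely illustrative):

    // Previous versions used System.Tuple:
    Tuple<string, int> oldStyle = Tuple.Create("Jane", 42);

    // The new C# 7 syntax uses System.ValueTuple:
    (string, int) person = ("Jane", 42);
    Console.WriteLine(person.Item1); // "Jane"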

Likewise, to declare it as a return type, the syntax is similar:
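For example, a small sketch with a hypothetical GetPerson method:

    static (string, int) GetPerson()
    {
        return ("Jane", 42);
    }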

The first thing to note about the new type is that it is not included automatically in a new project. If you try to use it immediately, you’ll see the following error.

As of VS “15” Preview 4 (not to be confused with VS 2015), you must include the System.ValueTuple NuGet package to take advantage of it.

This raises the question of how the new tuple type and the previous one, included since .NET 4, are related. They’re not. They are treated as two different types and are not compatible with each other: System.Tuple is a reference type and System.ValueTuple is a value type.
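A quick sketch of the incompatibility:

    (int, int) valueTuple = (1, 2);
    // Tuple<int, int> referenceTuple = valueTuple; // does not compile: the two types are unrelated
    Tuple<int, int> referenceTuple = Tuple.Create(1, 2);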

So what are the advantages over the previous version? The syntax is simpler, and there are several other benefits.

Named Properties

In the System.Tuple version, properties of the return object are referenced as Item1, Item2, etc. This gets confusing when there are multiple values of the same type in the tuple, as you have to know which position holds which value.

Now it’s possible to explicitly name the item types to reduce confusion.
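For example, here is a sketch with a hypothetical GetTotals method whose elements are named:

    // Hypothetical method with named tuple elements:
    public static (decimal price, decimal tax) GetTotals(decimal subtotal)
    {
        return (subtotal, subtotal * 0.07m); // tax rate is illustrative
    }

    // Callers use the names rather than Item1/Item2:
    var totals = GetTotals(100m);
    Console.WriteLine(totals.price);
    Console.WriteLine(totals.tax);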

The Item properties (Item1, Item2, etc.) are still included, allowing methods to be updated to the new type without breaking code based on its predecessor.

It’s also possible to explicitly name the values when creating the object:
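A short sketch (the names are arbitrary):

    var person = (name: "Jane", age: 42);
    Console.WriteLine(person.name);
    Console.WriteLine(person.age);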

Deconstruction

It is now possible to name and assign variable values upon creating (or returning) a tuple. Although not required, this reduces the amount of code needed to pull values out of the type.
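Using the hypothetical GetTotals method from the sketch above:

    // Deconstruct straight into local variables:
    var (price, tax) = GetTotals(100m);
    Console.WriteLine(price);
    Console.WriteLine(tax);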

It’s not certain whether C# will get wildcards like F# to automatically discard values that aren’t needed. If they are allowed, then it will be possible to create variables only for the values that are needed, like so:
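Something along these lines (speculative syntax at the time of writing):

    // A sketch of a possible wildcard/discard, keeping only the first value:
    var (price, _) = GetTotals(100m);
    Console.WriteLine(price);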

Updating Values

System.Tuple is immutable. Once created, it’s not possible to update any of its values. This restriction has been removed in the new version. From a purely functional perspective this could be considered a step backwards, but in C# many people find the new approach more forgiving and beneficial.
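A small sketch of the difference in mutability:

    var point = (x: 1, y: 2);
    point.x = 10;               // allowed: the new tuple's fields are mutable
    Console.WriteLine(point.x); // 10

    // var oldPoint = Tuple.Create(1, 2);
    // oldPoint.Item1 = 10;     // does not compile: System.Tuple is immutable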

Like all value types, when a tuple is passed into a method, a copy is created, so modifying it in the method does not affect the original.
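For example (a sketch; Mutate is a hypothetical method):

    static void Mutate((int x, int y) p)
    {
        p.x = 100; // changes only the local copy
    }

    var point = (x: 1, y: 2);
    Mutate(point);
    Console.WriteLine(point.x); // still 1: the method received a copy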

However, when comparing two different tuples, the Equals method compares the values in each, and if they are all equal, it considers the tuples equal.
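For example (a small sketch):

    var a = (1, "one");
    var b = (1, "one");
    Console.WriteLine(a.Equals(b)); // True: the values are compared element by element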

Integrations with Other Languages

Unfortunately, C#’s new tuple type doesn’t automatically translate to F#’s native tuples.

F# can’t deconstruct the values like it can with its native tuples, and to return one, you have to explicitly instantiate the object type and add the values.

Either way, the translation to F# isn’t horrible, as it acts like any other object passed to it by C#.

Wait Wait Don’t Tell Me…..On Second Thought

Consider the following:

Ideally, the example would start at 100 and loop until the counter hits zero, but the statement to decrement the counter is missing. It won’t ever stop running, and there is no visual cue to alert you that there is an issue. The most logical solution would be to add a PRINT statement.

Sadly, this does not solve the issue at hand, as the PRINT statement only displays its output after the script finishes executing. It’s great for knowing what occurred after the script has finished, but it is of no use for telling you what is currently happening. Conveniently, there is a solution using the RAISERROR function.

The first parameter is the message to display, and the second and third are the error level (severity) and the state. RAISERROR treats anything with an error level from 0-10 as a message and does not affect code execution. The state ranges from -1 to 255, and anything outside of that results in an exception (for the purposes of printing messages, this value is inconsequential). Using RAISERROR with the WITH NOWAIT option outputs messages immediately, describing how the script is executing as it happens.
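As a rough sketch of the pattern (the connection string, loop bounds, and delay are illustrative), the messages can also be observed as they arrive from a .NET application by subscribing to SqlConnection.InfoMessage:

    using System;
    using System.Data.SqlClient;

    class RaiserrorDemo
    {
        static void Main()
        {
            // T-SQL loop that reports progress with RAISERROR ... WITH NOWAIT.
            const string script = @"
                DECLARE @counter INT = 5;
                WHILE @counter > 0
                BEGIN
                    RAISERROR('Working, %d iterations remaining.', 0, 1, @counter) WITH NOWAIT;
                    WAITFOR DELAY '00:00:01';
                    SET @counter = @counter - 1;
                END";

            using (var connection = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            {
                // Severity 0-10 messages arrive through this event as they are sent.
                connection.InfoMessage += (sender, e) => Console.WriteLine(e.Message);
                connection.Open();

                using (var command = new SqlCommand(script, connection))
                {
                    command.CommandTimeout = 0; // let the loop finish
                    command.ExecuteNonQuery();
                }
            }
        }
    }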

informational message

When the error level is raised above 10, SQL Server raises an exception:

Error

This works really well when executing a command from within a .NET application, but an issue arises when executing code from an external process like SqlCmd. When using the RAISERROR function and supplying a severity above 10, it doesn’t always cause the process’s exit code (Process.ExitCode) to change to something other than 0, which is what alerts the calling application that there was an issue.

Exit code 0

Why use SqlCmd when .NET has a built-in way of executing code with SqlCommand? The SqlCommand object does not allow the use of the batch separator GO. For applications doing Create/Read/Update/Delete (CRUD) operations, not using the GO statement isn’t a problem, but it is important to applications which perform maintenance on databases. Certain SQL statements, like CREATE PROCEDURE, must be the first item in a batch, meaning chaining multiple statements together in one execution is impossible without using GO.

Fortunately, there is an easy solution to this problem. Change the error level to 20-25 and add the WITH LOG option (values greater than 25 are interpreted as 25). This will close the connection to SQL Server and force applications like SqlCmd to return an exit code that is something other than 0. Unfortunately, to set a severity level between 19 and 25, the account running the script must either be a member of the sysadmin role or have the ALTER TRACE permission.
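A minimal sketch of checking the exit code from .NET (the server, script file name, and sqlcmd arguments are assumptions for illustration):

    using System;
    using System.Diagnostics;

    class SqlCmdRunner
    {
        static void Main()
        {
            // maintenance.sql is a hypothetical script that calls
            // RAISERROR('Something fatal happened.', 20, 1) WITH LOG when it hits a fatal problem.
            var startInfo = new ProcessStartInfo
            {
                FileName = "sqlcmd",
                Arguments = "-S . -E -b -i maintenance.sql",
                UseShellExecute = false
            };

            using (var process = Process.Start(startInfo))
            {
                process.WaitForExit();

                // Severity 20+ WITH LOG closes the connection, so sqlcmd reports a non-zero exit code.
                Console.WriteLine("sqlcmd exit code: " + process.ExitCode);
            }
        }
    }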

Exit Code Not Zero

It is possible instead to capture the output and error messages and attempt to parse them to determine whether an error occurred, but this is error prone and not the standard method for detecting failures. It may work for applications specifically written for this scenario, but applications written to interact with others in the Windows ecosystem look at the exit code to determine process success or failure, making setting it to the correct value vital when interacting with different systems.

Can I at Least Get Your Name Before We do This?

As applications grow in size and age, they can become difficult to navigate and unruly to manage. Most organizations give little thought to spending extra time maintaining existing code to a certain level of quality, and even less to spending money to rewrite or even refactor code to keep it maintainable. Fortunately, in .NET 4.5 Microsoft added a significant piece of functionality to at least help gather data on application flow. In the System.Runtime.CompilerServices namespace, three new attributes were added to help log calling-function information.

  • CallerFilePathAttribute
  • CallerMemberNameAttribute
  • CallerLineNumberAttribute

Add the attributes to the appropriate parameters in a method to use them.
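A minimal sketch of how they might be wired up (the Logger class and message are illustrative):

    using System;
    using System.Runtime.CompilerServices;

    public static class Logger
    {
        // The attributes go on optional parameters; the compiler fills in
        // the values at each call site.
        public static void Log(string message,
            [CallerFilePath] string filePath = "",
            [CallerMemberName] string memberName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            Console.WriteLine($"{message} (called from {memberName} in {filePath}, line {lineNumber})");
        }
    }

    public class Program
    {
        public static void Main()
        {
            // No caller arguments supplied; the compiler adds them automatically.
            Logger.Log("Starting up");
        }
    }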

Console Output

The great part about this is that it populates the information without modification to the calling methods’ source code. You can add parameters with these attributes to existing methods, and they will be populated automatically. There are a couple of quirks, though.

Optional Parameters

In order for the application to compile, you must mark these as optional parameters. After looking at how the information is resolved at compile time, this makes sense. As ILSpy shows, the attribute is attached to the called method’s parameter, but the compiler generates the parameter values in the calling method. This means the compiler building the calling code must be able to see that the attribute has been added to the parameter (so this feature requires recompiling the assemblies that call these methods), and it must understand that it needs to add the caller information when invoking. An optional parameter is required to signal to the compiler that the source file is not missing parameter information (which would result in an error) and that the calling method expects the compiler to supply the correct default values.

In addition, making them optional is a smart idea anyway, because not all .NET languages understand these attributes. For example, calling the C# method from F# doesn’t give the expected values; it inserts the defaults. F# doesn’t understand that when invoking this C# method, it should replace the optional parameter values with the caller’s information.

F# Console Output

Namespaces

When using CallerMemberNameAttribute, namespaces aren’t provided. This means that if you have methods with the same name in different classes or namespaces, there is no way to differentiate between them in the information passed to the called method. You can get around this by using the other two attributes, which give you the file of the calling method and even the line where the called function was invoked, but without them there is a level of uncertainty.

The parameters can be overridden

If you explicitly set the parameter’s value, the calling method’s information won’t be sent.

Depending on the scenario, this can be both an advantage and a disadvantage. If you need to override it and pass in different information, you can, but it also means you can’t add this attribute to an existing parameter and have it override what the calling method is already passing.
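Continuing the hypothetical Logger sketch from above:

    // Supplying an explicit value suppresses the compiler-provided caller information.
    Logger.Log("Starting up", memberName: "SomeOtherName");
    // The output now reports "SomeOtherName" instead of the real calling member.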

Reflection

Reflection doesn’t work at all with this feature. Caller information is a compile-time feature, and reflection is a runtime one. If you invoke a method with these attributes through reflection, the compiler won’t know to add the values, so you’re on your own to supply the information. (Reflection with optional parameters will work by sending Type.Missing, but it won’t give you the caller values.)

Speed

There is no slowdown when using this, because it’s all resolved at compile time. Inspecting the output in ILSpy reveals there is no reflection or other runtime resolution involved. (This makes a lot of sense, because reflection doesn’t have information like the code file path or line number of the calling function, so it has to be resolved at compile time.)
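In other words, the call site ends up compiling to something roughly like the following (the path and line number are made up for illustration):

    // What Logger.Log("Starting up") roughly becomes after compilation:
    Logger.Log("Starting up", @"C:\Projects\CallerInfoDemo\Program.cs", "Main", 23);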

The new feature is great, but it has its limitations. Considering that it’s possible to add the attributes to existing methods, and that the compiler inserts the information as long as the invoking methods are recompiled, a wealth of information becomes available with very little work. All around, it’s a significant win for helping maintain applications.

You can find all the project contents here.

Language Comparison Euler Problem One

Question:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.

Find the sum of all the multiples of 3 or 5 below 1000.

F#

F# Example One

or

F# Example Two

C#

C# Example One

or

C# Example Two

JavaScript

JavaScript Example One

SQL

or

PowerShell

or

The Book Of F#

The Book of F#

The book comes out this month, by F# MVP Dave Fancher. It’s a great book! It has an easy-to-read narrative and a fun style, and it is designed for object-oriented .NET developers who want an introduction to both functional programming and F#. It’s a great reference book and learning tool. Even if you don’t want to program in F#, the functional approach to programming will teach you a lot, with great examples and easy-to-read explanations. I can honestly say the book never gets old. (I was the technical reviewer and got to read each chapter about four times. Think about how many technical books you would read four times without contemplating putting a fork in your eye. Honestly, with this one I didn’t, and the inside sci-fi references are fun too!)

On top of this, the introduction is awesome, and it explains exactly why you want to learn functional programming and F#.

The book can be found on the No Starch Press site.