Always Answer the Phone, Respond to the Email, and Go to the Interview

Several years ago I worked with someone who seemed genuinely happy at his job. Was he? I don’t know. He never mentioned disliking where we worked, but at the same time he never gave himself the opportunity to find something new. Each time a recruiter called about a new employment opportunity, he blew them off and actively convinced them not to call again. Is this a problem? Outside of possibly being impolite, no, but it is limiting. By never inquiring about other opportunities, he never knew what he missed.

On my team, we have a rule: Always answer the phone, respond to the email, and go to the interview. It’s simple: if someone calls regarding a job opportunity, you must at least listen to their proposal. Upon hearing this, people surmise it is a way to remove underperformers, or people who don’t mesh with the team. This isn’t the case. There are many reasons we have this rule, but they distill to two important goals.

  1. It forces the team to acknowledge that anyone could leave at any time. Admittedly, this is self-serving. Having one person with all the knowledge on a particular subject is an easy path towards failure, and that scenario simply cannot be allowed to exist in the long term.
  2. The more important reason is that we want our team members to succeed. Unfortunately, this doesn’t necessarily mean they should stay. Another position may allow them to grow with new responsibilities, let them learn new technologies, or simply be a better fit for them personally. By making it explicitly clear that we expect our team members to talk to recruiters and interview at other companies, everyone knows their best interests are placed above the necessities of the team. Everyone knows this is just a job that might end tomorrow, and they also know their life is their life, and it might end tomorrow too.

Even after explaining this, I get pushback. People tell me they are happy, that they don’t want to put in the effort of interviewing, or that they don’t want to risk what they have by changing jobs. I always respond with three points.

You don’t know what you don’t know

This is partially true for all industries, but it is especially pertinent to the tech industry. Things change all the time. Standards, applications, methodologies, and approaches constantly appear, influence, and fade. Your career is your career and no one else’s. Without constantly watching what is happening, you have no idea if your skills are still relevant. The organization you work for does not have your best interests in mind. (Nor should it.) The people in charge can and must make decisions based on what is best for the organization, and that may not coincide with what’s best for you. Unless you actively take notice and explore other opportunities, you will never know where you stand in relation to everyone else.

You never know when you might need to change jobs in a hurry

At my first job out of college, I had worked at the same place for about two years when a friend called with shocking news: “Something is wrong. Your company has been delisted from the stock market.” Four hours later, everyone gathered in the cafeteria and management announced the company had declared bankruptcy. Outside of a few people in accounting and the executive staff, no one knew there was a 50 million dollar debt payment due, and we couldn’t secure a line of credit to cover it. We were all dumbfounded. We had spent the previous six months significantly reducing expenses and increasing income through new lines of business. There was nothing the rest of us could have done to prevent it, and outside of the executive staff, no one knew it was going to happen. One week later, over half the company was laid off while the rest stayed to handle the continuing business until the company’s assets could be sold.

Some people found jobs quickly, but others weren’t so lucky, because several of the development groups used waning technologies. Most of the staff never imagined this happening, and although they were decent at their jobs, companies didn’t want to spend time or money training new employees. It was unfortunate but true, and had the developers evaluated what they would need to find a new job, they might have had an easier time. What everyone learned was this: even if the job market is favorable to your occupation, it still takes time to make contacts, research potential opportunities, and move through the hiring process.

With this in mind, would some of those people have changed jobs earlier if they had even remotely thought this was a possibility? I can say unequivocally, yes. There were several people who didn’t like working there at all, but they stayed, because they felt nothing was better than what they had. It’s much easier to be unhappy than it is to be confronted with the unknown.

Are you happy or are you comfortable

This is an important distinction. Most people confuse being comfortable with being happy. Happy is being able to say no to a better offer. Comfortable is being afraid that something better might come along, forcing you to make a decision that might be unpleasant. Being comfortable is an illusion. It is a way of deceiving yourself so you don’t look for another opportunity. Why? Because the fear of uncertainty is much more uncomfortable than the current unhappiness.

During a commencement speech, Jim Carrey told a story about his dad, who took a job he didn’t want because he thought it was the safe choice. The story ends with his dad losing that job, and Carrey said he learned many things from him, but especially: “You can fail at what you don’t want, so you might as well take a chance on doing what you love.” The perfect job comes along once in a great while, and it is different for everyone. If you don’t actively pursue it, you’ll never know it’s there.

I have never worked at an organization where my job was completely secure for an extended period of time. I have worked for large companies and small ones. I have worked for startups and established organizations. You may not know it, but no matter where you work, something is always in flux. Whether it’s a major client that is considering leaving, a lack of proper accounting or even embezzlement, there is always something that could cause your position to not be there in the morning.

Things you Should do with Strings While Your Coworkers are on Holiday and No One is Checking the Production Code Branch

Hi everyone! This is part of the really cool new CS Advent Calendar run by Matthew Groves! Go check out all the really great articles by everyone!

On a not infrequent basis, interviewers ask the question, “What is a string?” and they are looking for a quick answer similar to, “It is an immutable reference type.” This normally sparks follow-up questions such as, “Explain what immutable means in this scenario,” or “Are there any examples where you can change a string?” The most common answer is, “No,” and with good reason. Adding two strings together creates a new third string. Calling methods like ToUpper() doesn’t modify the string being operated on; it creates a new one. And although strings can be treated like an array of characters, the compiler prevents modifying the characters at their specific positions.

Technically, the more correct answer is, “It depends.” Under most circumstances, it is not possible by design, and rightfully so. Several factors dealing with efficiency and predictability rely on this fundamental idea, but that doesn’t account for the “allow unsafe code” compiler option. This is, in a sense, cheating, as it goes against established ideas of how most .NET applications work, but with it, it is possible to mutate a string using the fixed statement, and exploring this exposes some interesting behaviors of the .NET runtime.

To elucidate this, I created an assembly project and a unit test project to show various scenarios using the fixed statement and what happens. In these examples, the unit tests don’t actually test for validity. They merely bootstrap the test methods and print the results.

So what is happening with this code? The first necessity is to understand what the fixed statement does. According to the C# Language Reference:

The fixed statement sets a pointer to a managed variable and “pins” that variable during the execution of the statement. Without fixed, pointers to movable managed variables would be of little use since garbage collection could relocate the variables unpredictably. The C# compiler only lets you assign a pointer to a managed variable in a fixed statement.

With the fixed statement, it is possible to change a string in place which breaks its concept of immutability. The unit test:

  • prints the public readonly string “Bah Humbug!!!!!”
  • runs the method which alters that string
  • prints the same string, which is now “Happy Holidays!”
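The original code isn’t reproduced here, but a minimal sketch of the technique looks something like the following. The class and method names are assumptions, the project must be compiled with the “allow unsafe code” option, and note that on newer runtimes mutating an interned literal in place may behave differently or even crash.

```csharp
using System;

public class ImmutableStringsExample
{
    // Both strings are the same length, so the characters can be
    // overwritten in place without corrupting the string's header.
    public readonly string SeasonsGreetings = "Bah Humbug!!!!!";

    public unsafe void MutateString()
    {
        const string replacement = "Happy Holidays!";
        // fixed pins the string so the garbage collector cannot move it,
        // and hands back a raw char* into its character buffer.
        fixed (char* p = SeasonsGreetings)
        {
            for (int i = 0; i < replacement.Length; i++)
            {
                p[i] = replacement[i];
            }
        }
    }
}

public static class Program
{
    public static void Main()
    {
        var example = new ImmutableStringsExample();
        Console.WriteLine(example.SeasonsGreetings); // Bah Humbug!!!!!
        example.MutateString();
        Console.WriteLine(example.SeasonsGreetings); // Happy Holidays! (where the runtime permits it)
    }
}
```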

Now what happens when a local string is modified that is exactly the same as the class-level string?

At first glance, the local string (localSeasonsGreetings) should be modified, and the class-level string (SeasonsGreetings) should be unchanged.

In this example, the unit test runs the method, which prints out the values of the local string and the class-level string, and then the unit test prints out the value of the class-level string.

The local string is modified, and the class-level string is also changed. Why did this happen? The answer lies in String Interning. When a literal string becomes accessible to the program, it is checked against the intern pool (a table which houses a unique instance of each literal string, plus any strings that have been programmatically added). If the literal already exists in that table, a reference to the string in the table is returned instead of a new instance being created. Since the two string entries in the example are the same (Bah Humbug!!!!!), the runtime actually creates one reference for both of them, and hence, when one is modified, the other is affected.
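Interning can be observed directly, with no unsafe code at all. A small sketch:

```csharp
using System;

string a = "Bah Humbug!!!!!";
string b = "Bah Humbug!!!!!";
// Two identical literals share one interned instance.
Console.WriteLine(object.ReferenceEquals(a, b)); // True

// A string built at run time is a distinct instance...
string c = new string("Bah Humbug!!!!!".ToCharArray());
Console.WriteLine(object.ReferenceEquals(a, c)); // False

// ...but string.Intern returns the pooled instance for the same value.
Console.WriteLine(object.ReferenceEquals(a, string.Intern(c))); // True
```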

So what happens if we piece together the string at runtime from two constants?

Notice in the example code above, the localSeasonsGreetings value is changed so that it is pieced together at runtime instead of being a single literal.

Since the local variable’s instance of Bah Humbug!!!!! was created when the method ran (and is not a literal), the CLR created a new instance of this string. When this local instance was modified, the class-level variable’s instance was not, differing from the previous example.

What happens when the same string value is in different assemblies?

Based on the previous examples, it works how you would expect it to. Since String Interning is handled by the CLR at runtime, not at compile time, which assembly the string is located in doesn’t matter. All literals loaded into memory are added to the same pool, so modifying the value in one assembly affects all other instances in the entire application.

Up until this point, we’ve only seen the effects of String Interning on instances of a string. What happens if we return a literal from a static method? To test this, I added a method that returns “Bah Humbug!!!!!” to the ImmutableStringsExample.

The static method was called after the modification method ran, and its result did not change. We could assume that since the static method ran after we modified the interned “Bah Humbug!!!!!” reference, the runtime couldn’t find the original literal and created a new instance. Now the question is, “Is this method deterministic?” Will this method always return a new instance of “Bah Humbug!!!!!”?

Clearly the answer is no. The time at which the application calls the static method determines its behavior. Now what happens with a non-static method? Are the same methods in different objects the same?

Non-static methods work the same as static ones in this regard. Once run, the CLR will make updates and return a reference to the same object.

With the above examples, we see that strings in .NET are really a lot more complicated than they initially let on. The runtime handles a lot of complicated optimizations, and there is a lot of work that goes on behind the scenes to ensure that efficiency. Those efficiencies come with certain restrictions, such as immutability, but in the whole scope, those small restrictions can be managed and used to benefit the application.

The code for this post can be found on GitHub.

Are you Null?

Within the last couple of days, Microsoft released a proposed update for the next major release of C#, version 8.  Over the past several years, there has been a large debate on the existence and use of null in software development.  Allowing null has been called the billion-dollar mistake by the null reference’s inventor, Sir Tony Hoare. With this in mind, Microsoft has decided to help the C# community by adding functionality to the C# compiler to point out where a null reference might occur.

With the release of C# 8, anything referencing an object (string, etc.) must explicitly declare itself as possibly being null, and if that variable isn’t explicitly checked before being used, the compiler generates a warning that a possible null reference might occur. So how does this work? Adding ? to the end of a reference type signifies that the developer acknowledges null might occur.
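As a rough sketch of how this reads in code (the variable names are illustrative; in the shipped version of C# 8 the feature is enabled per file with #nullable enable or project-wide in the project file):

```csharp
#nullable enable
using System;

string greeting = "Happy Holidays!"; // non-nullable: assigning null would warn
string? maybe = null;                // the ? acknowledges null might occur

// Console.WriteLine(maybe.Length); // warning: possible null reference

if (maybe != null)
{
    Console.WriteLine(maybe.Length); // no warning: flow analysis proves it is non-null here
}
```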

This looks like it would be a breaking change, and all code written in a previous version would suddenly stop compiling. This would be true except for two things.

  1. You must use a compiler flag to enforce the rule.
  2. The flag will only generate warnings, not errors.

So legacy code is safe in the upgrade process if it’s too difficult to convert.

With this, they are still working out a number of scenarios that prove tricky to handle, such as default array initialization (new string[2]). Their comments about all of these can be found on their blog on MSDN.

I’ve added their code examples below of edge cases they are still working on:

Personally, I hoped the compiler would enforce these rules a little more strongly. Some languages, like F#, strictly disallow null unless it is explicitly permitted, and other functional languages do not allow it at all.

It is possible to turn on “Warnings as errors” and have the compiler stop if it encounters a possible null reference, but this assumes the rest of the code has no other warnings that would stop compilation. Ideally, no warning should ever appear in code without being fixed, but that is a very difficult standard to follow when dealing with legacy code from years past where no one followed that rule before you. Either way, the C# team was in a tight situation, and they did the best they could. They needed to make strides towards making null references easier to track, but they couldn’t break all of the legacy code written in previous versions of C#.

Functional Languages in the Workplace

On a semi-regular basis, people question why I choose F# to implement projects. They ask why I use a lesser-known language when one like C# has a larger developer pool and is more widely documented. I explain my rationale, citing personal experience as well as documented cases of others’ success stories. There is significant evidence showing functional languages can reduce commonly occurring defects (due to their inherent immutability), provide easier support for scalability, and have a stronger type system allowing for more expressive code. There are numerous testimonials on the use of functional languages and their benefits, but after hearing all of this, people are still doubtful about even considering a change. Assuming this evidence is correct, the question of “Why isn’t this a serious choice for the majority of organizations?” continues to appear.
During discussions about switching to a functional language, I repeatedly hear several common questions and arguments for resisting change. Most of these embody fear, uncertainty, and doubt. Several could be applied to moving to any new technology, and although they should be considered, none of them are insurmountable. Here are my responses to the most common arguments against change I receive.

Our code is already written in language X, and it will be hard to make a change

There will always be legacy code, and it probably deviates from the standards used today. Was it written in a previous version of the currently used language? Does it contain libraries that are no longer supported? Was it written in such a way that converting it to current standards is difficult or impossible? Even if the answer to these questions is yes, that doesn’t mean other projects must suffer the same fate.
Legacy code can’t hold you back from technological advancements, and it most likely doesn’t now. Over the last several years, many software vendors have made sweeping changes to languages and technologies, leaving them only vaguely resembling what they looked like when first created. The introduction of generics, the inclusion of lambda expressions, and asynchronous additions made huge advancements in several different languages and greatly changed common approaches to solving problems. These enormous changes didn’t stop organizations from modernizing many of their applications to take advantage of new features, even though code written with them is radically different from previously created applications.
Radical shifts in technology happen all the time, and almost every organization shifts its strategies based on trends in the industry. Organizations which defer changes to their current approach often find migration more difficult the longer they wait, because they continue to implement solutions using that approach in the meantime. Mindlessly shifting from one approach to another is never a wise decision; that introduces chaos. But neglecting to try new approaches due to legacy concerns can only end in repeating the same mistakes.

Our developers don’t know language Y. It will be too hard and costly for them to learn and migrate.

A developer’s job is to learn every day. There are new features to understand, new architecture patterns to master, and new languages to learn. The list is endless. The belief that at any stage in one’s career the road to deeper understanding ends is myopic, and ultimately an exit ramp to another profession or a stagnant career. Developers should be challenged. Organizations should push their staff to understand new things, and compared to the opportunity cost of repeating the same mistakes, the amount of time and money required to train people is often negligible, especially with tools like books, video learning, computer-based training, etc.
There are some people who have no desire to continue learning, and that’s OK. New development isn’t for everyone, and going back to the previous point, there are always applications in need of support that won’t or can’t be converted. Organizational migration to a technology is almost never an all-or-nothing approach, and some solutions should be left exactly how they are, because the cost of converting them would outweigh the benefits. There will be people to maintain those in the long term, but these solutions cannot dictate how other projects advance.

What if we fail and we are stuck with a language we can’t use?

If an organization takes the leap of faith and switches to a functional language, what is the probability of some failure during the process? The honest answer is 100%. Everyone fails every day at something. Failure is inevitable. With this in mind, you’re already failing at something, so the question is what are you going to do to try and fix it? You’re going to create other problems too, but with planning, retrospective analysis, and learning from those mistakes, those will be solved as well, and ultimately the position you end at will be further along than where you started.
A few years ago, I had a discussion with an organization about their development practices. They were extremely adept at knowing where their time was allocated: support, feature enhancements, refactoring, etc. When asked about their breakdown, they explained that on average 30% of their time went to fixing production defects from previous releases. They were perplexed about why they were missing deadlines despite becoming stringent about code quality. I asked about their plan to fix it, and they responded with a few ideas, but their final answer distilled to, “write better code.” When confronted with the question, “What are you going to change?” they said, “Nothing. Changing the development process is too time consuming and costly. If we update our practices, we’ll fall further behind on our releases.” The definition of insanity is doing the same thing and expecting a different result, yet several organizations believe they can break the cycle simply by standing still. If changing how an organization develops isn’t feasible, then changing what they develop with is one of the few viable options remaining. It is much easier to change a technology than to change an ingrained culture, which is exactly why using languages and tools that enforce error-reducing practices is a much more efficient approach than convincing everyone to work in a certain way.
Most organizations resistant to change perceive technology migrations as revolutionary. They firmly believe all use of a certain technology immediately stops and the new one begins, because it is much easier to think in terms of black and white (one vs. the other) when change is a rare and uncomfortable occurrence. Adopting anything new should be cautious and take practice. It should be evolutionary. Organizations should try several smaller variations of an approach, learning from each and refining their ideas on gradually larger projects. Embracing adaptation and a “failure leads to a stronger recovery” mindset ultimately leads to a better outcome.
It is almost certain that moving to a functional language from an unrelated paradigm is going to be difficult and confusing, but the fault does not lie with the language itself. As with anything new, the concepts are unfamiliar to those starting to use it. There will be mistakes during the learning process, and some projects will probably take longer than expected, but basing the long-term benefits on the first attempt to implement anything will show a biased result against it. With time, moving to an approach which helps developers make fewer mistakes and write better, cleaner code will save both time and money.

It’s not widely used enough for us to find people to support it

My coworker recently attended two meetups concerning functional programming, each having approximately 25 attendees. After the first one, he decided to do an experiment at the second. He asked the people at the meetup, “How many of you use a functional language at work?” and the result was astounding. Only one person admitted to it, and only part time. At a minimum, there are 25 people at each location who are excited enough about functional programming to attend a meetup on their own time, on a topic which has nothing to do with the tools they use at work, and these people are only a representation of the larger workforce. There are many others who were either unable to attend or unaware of the event.
There is almost no place in the United States that isn’t a competitive market for development staff. Large companies are able to pay higher rates and have better benefits, which means they will pull the majority of the most qualified candidates. Smaller organizations can’t offer enormous benefits packages, placing them in a difficult position when filling needed roles. Picking a technology with fewer people to fill the role would seem to place those organizations at a disadvantage, but this ignores the overall demand for those types of people. Looking solely at the number of potential applicants, the pool of functional programmers is smaller, but organizations using functional languages aren’t nearly as widespread either, so they face less competition when searching for candidates. Furthermore, assuming the statistics surrounding the benefits of functional languages are correct, those organizations will require fewer programmers, accommodating the constraint of a smaller pool of applicants.


Functional languages can be an excellent fit for organizations, both those just starting development and those which have been established for a considerable length of time. Most resistance to using them comes from misunderstanding the benefits compared to the cost of changing languages. Attempting to better the development process by focusing on better tools is neither difficult nor overly time consuming.

Regular Expressions Presentation

I have uploaded my notes from my presentation on Regular Expressions.  Currently, I am fleshing out my presentation notes into a more readable format in the readme file, but I have uploaded everything now in case you want to get the raw notes early.  They can be found at:



When was the last time you sat down and talked to your team about problems?  What was the last task or procedure you changed because it was a bad fit for the project?  The longer you wait, the worse it gets, because the longer a team works together, the less likely someone is to mention a difficulty or a frustration.  Once people get used to a routine that is bearable, they will learn to live with it even though it’s uncomfortable, and it’s this situation which leads to frustration, turnover, and burnout.

I am fortunate to have a team that is vocal and willing to discuss issues.  The team is relatively good at working through development scenarios where issues might arise.  Code deployments are automated, database updates are tested multiple times in varying scenarios, and every time an issue occurs where a change in process could help, we look at implementing it.  For a long time, we were diligent at applying this test-and-fix approach to everything except ourselves, and that is when the unexpected happened.  A team member completely shocked me and let me know that our communication with him was poor and he didn’t feel included.  He works remotely most of the time, and for the most part he was the only one.  The rest of the team would have conversations in hallways, etc., and to us this was the course of a normal day.  His knowledge and insights were being excluded, simply because we were making decisions about things we didn’t feel warranted an official meeting.

This left us in an awkward position.  We either needed to end the possibility of working remotely, or we needed to rethink how we communicated on a daily basis.  Our problem was that most of us didn’t even know we had a problem, and that certainly meant we didn’t know how to fix it.  Most organizations handle problems like this by minimizing the scenarios where they have issues.  We took the opposite approach and forced ourselves to confront it and understand it, and from this we created a new policy: all people must now work remotely at least one day a week.  Why did we take this approach?  We understood that remote work is too important to give up.  We also understood that unless we analyzed what was wrong, it would never improve.  In the end, we would rather take time to fix a problem than keep suffering through it.  This approach helped not only our communication but showed us other areas for improvement as well.


For us to solve the problem, we must first all understand what it is.  With everyone working away from the office, we can all see what problems there are with communication.  Each person can now look at how the team functions and provide a unique perspective on how to make it better.  We found that not only did our communication with people off site improve, but with people on site as well.  We are now much more diligent about communicating ideas and decisions with everyone, and we are much more cognizant about recording information where it can be accessed anywhere at any time.


Despite any attempt at quiet working conditions, most offices are chaotic places.  Programming requires concentration, and several tasks are only easy to accomplish when someone has several hours of uninterrupted work.  Common pieces of advice include, “put on headphones” or “book a meeting room and close the door.”  These are fine, but there is always the possibility people will interrupt.  While at home, coworkers cannot do this, allowing greater relaxation, which leads to an easier ability to focus.  Now that each team member has at least one day where work can be uninterrupted, they commonly save long tasks for when they are not at the office.

Disaster Recovery

Being able to access key internal systems from home is not just for people who live too far away to be on location.  We depend on it in cases when people can’t drive to the office, or when emergencies arise and we don’t have time to make the commute.  A crisis is not the time to find out your equipment doesn’t function.  With each member testing remote access on a weekly basis, we have relatively high certainty that it will work when we need it to.  This is a tool used in emergencies, just like redundant servers or a secondary site.  You can always hope it works when necessary, but you won’t know until you try.


Trust is something which everyone wants to believe exists but is often in short supply.  Most places have the capability to allow people to work remotely, but leaders often joke about their employees watching television instead. (I actually interviewed at a company where the hiring manager threateningly said that he’ll know if people aren’t doing work while they are remote.)  Allowing employees to work at home when necessary shows a level of implicit trust.  It tells employees management has enough faith in their work ethic that, if it’s only once in a while, the project won’t suffer too much.  Requiring everyone to work remotely changes the narrative.  It becomes a common occurrence, and shows everyone they are trusted to do what they need to do.  Trust among a team is key.  It allows people to be open about issues and ideas for improvement, and without it teams will fail to improve.

Quirks with Pattern Matching in C# 7

With C# 7, Microsoft added the concept of pattern matching by enhancing the switch statement. Compared to functional languages (both pure and impure), this seems somewhat lacking in a feature-by-feature comparison; however, it is still nice in allowing a cleaner format of code. With this, there are some interesting quirks that you should be aware of before using it. Nothing they’ve added breaks existing rules of the language, and with a thorough understanding of how the language behaves their choices make sense, but there are some gotchas that on the surface look like they should function one way, but act in a completely different manner.

Consider the following example. C# 7 now allows the use of a switch statement to determine the type of a variable. It has also expanded the use of is to include constants, including null.
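The original example isn’t reproduced here, but a sketch of these two features (the values are assumed) looks like:

```csharp
using System;

object greeting = "Hello";
switch (greeting)
{
    case string s:
        Console.WriteLine($"string: {s}"); // type pattern: matches on the run-time type
        break;
    case int i:
        Console.WriteLine($"int: {i}");
        break;
    default:
        Console.WriteLine("something else");
        break;
}
// prints "string: Hello"

object nothing = null;
Console.WriteLine(nothing is null); // True: is now works with constants, including null
```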


With these two understandings, which line executes in the following code?

Based on the previous examples, it’s a reasonable conclusion that one of the first two case statements would execute, but they don’t.
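A sketch of the surprise (the variable name is borrowed from the post’s later example):

```csharp
using System;

object jennysNumber = null; // the declared type is known, but the value is null
switch (jennysNumber)
{
    case string s:
        Console.WriteLine("case string");
        break;
    case int i:
        Console.WriteLine("case int");
        break;
    default:
        Console.WriteLine("default"); // this executes: type patterns never match null
        break;
}
// prints "default"
```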

The is operator

The is operator was introduced in C# 1.0, and while its use has been expanded, none of the existing functionality has changed. Up until C# 7, is was used to determine if an object is of a certain type, like so.
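A minimal sketch of the classic check (the variable name and value are assumptions):

```csharp
using System;

string jennysNumber = "867-5309";
Console.WriteLine(jennysNumber is string); // prints True
```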

This outputs exactly as expected. The console prints “True”. (Replacing string with var works exactly the same. Remember that the object is still typed; var only tells the compiler to figure out what type the variable should be instead of explicitly telling it.)

What happens if the string is null? The compiler thinks it’s a string. It will prevent you from passing it to methods requiring another reference type, even though the value is explicitly null.
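A sketch of the same check with a null value:

```csharp
using System;

string jennysNumber = null;
Console.WriteLine(jennysNumber is string); // prints False: a null reference has no run-time type
```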

The is operator is a run-time check, not a compile-time one, and since the value is null, the runtime doesn’t know what type it is. In this example, the compiler could emit flags for the runtime saying what type the variable actually is even though it’s null, but this would be difficult if not impossible for all scenarios, so for consistency, it still returns false. Consistency is key.

Printing out True and False is nice, but it’s not really descriptive. What about adding text to describe what is being evaluated?
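A sketch of the attempt (again, the variable name is an assumption):

```csharp
using System;

string jennysNumber = "867-5309";
// + binds tighter than is, so the is check applies to the whole concatenated string:
Console.WriteLine("Is jennysNumber a string? " + jennysNumber is string);
// prints "True": the question text is swallowed into the operand of is
```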

Why didn’t the question appear? It has to do with operator precedence. The + operator has higher precedence than is and is evaluated first, so the is check is applied to the entire concatenated string.

This becomes clear if the clause is flipped, because the compiler doesn’t know how to evaluate string when used with the + operator.

Adding parentheses around jennysNumber is string fixes the issue, because the parentheses force the is expression to be evaluated before the + operator.
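A sketch of both the flipped clause and the parenthesized fix:

```csharp
using System;

string jennysNumber = "867-5309";

// Flipped, the compiler tries to parse string + "..." and fails:
// Console.WriteLine(jennysNumber is string + " was the test"); // compile error

// Parentheses make is evaluate first:
Console.WriteLine("Is jennysNumber a string? " + (jennysNumber is string));
// prints "Is jennysNumber a string? True"
```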

Pattern Matching with Switch Statements

Null and Dealing with Types

Null is an interesting case, because, as shown above, at run time it’s difficult to determine what type a null object is.
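A sketch of the base example (the value is assumed):

```csharp
using System;

string greeting = null;
switch (greeting)
{
    case string s:
        Console.WriteLine("is a string");
        break;
    case null:
        Console.WriteLine("is null"); // this executes: the type pattern skips null
        break;
}
```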

This code works exactly how you would think it should. Even though the variable’s type is string, the runtime can’t determine that, so it skips the first case and reaches the second.

Adding a type object clause works exactly the same way.

What about var? Case statements now support var as a proposed type in the statement.

If you mouse over either var or the variable name, the compiler will tell you what type it is.

It knows what the type is, but don’t let this fool you into thinking it works like the other typed statements. The var pattern doesn’t care that the runtime can’t determine the type. A case statement with the var type will always execute, provided there is no condition forbidding null values, such as when (o != null). Like before, it still can’t determine the type inside the case statement.
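A sketch of the var pattern matching a null value anyway:

```csharp
using System;

string greeting = null;
switch (greeting)
{
    case var s when s != null:
        Console.WriteLine("non-null string");
        break;
    case var s:
        Console.WriteLine("var still matched"); // this executes even though s is null
        break;
}
```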

Why determine object type at compile time?

At any point in time (barring the use of dynamic), the compiler knows the immediate type of the variable. It could use this to jump directly to the correct case for the type. But if it did, it couldn’t handle the following scenario, or any scenario involving inheritance of child types.
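A sketch of why the run-time type has to drive the match:

```csharp
using System;

object greeting = "Hello"; // compile-time type is object, run-time type is string
switch (greeting)
{
    case string s:
        Console.WriteLine("is string"); // this executes: the run-time type decides
        break;
    default:
        Console.WriteLine("unknown");
        break;
}
```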

Personally, I would like to see either a warning or an error when a type case can never match, such as case string s when (s is null), but as long as the code is tested and developers know about this edge case, problems can be minimized.

All the examples can be found on GitHub: