Rob Kraft's Software Development Blog

Software Development Insights


CI/CD – Is It Right For You?

Posted by robkraft on July 6, 2021

Prologue:  Software Practices Are Only Best Within Specific Contexts

Have you ever attended a conference session with a presenter who is very passionate about a new process?  They believe that everyone should be using the process.  Often I have doubts that the process will work for my team, but the presenter seems to imply that I don’t fully understand what they are attempting to convey, and that perhaps I just need to start making the change to realize its value.  While this could be true, I have learned over the years that many people pitching process changes that provided an enormous return for their own software development teams are pitching solutions that might not work well for other software development teams.  The presenters are not trying to sell me something I don’t need, and they are not being dishonest; they are usually just considering their solution from a single context.

And what do I mean by context?  A context is the processes, people, tools, networks, languages, deployment models, and culture in which software is produced and deployed.  A practice that is optimal in one context may not be optimal in another, just like the best practices for building a house may differ from the best practices for building a bridge or a skyscraper.  Some aspects of context affect some best practices more than others.  For example:

  • Your deployment model is an aspect affecting the processes you choose.  It is possible to use the CI/CD process when you deploy to a web app, but impossible to use CI/CD when deploying software to chips that are embedded in the engines of airplanes.
  • Stand-up meetings might be ideal for a scrum team with all members co-located and starting work at the same time, but a waste of time for a team of four all sitting at the same desk doing mob programming.

This series of articles looks at software practices one by one to highlight what contexts each practice may work well in, and contexts where the practice may not work well, or may even be counter-productive.

Here is an example of a context-constrained best practice.  After graduating from college, a developer created static web sites for years, modifying HTML pages and JavaScript, committing code changes to GitHub, maintaining a web server, and uploading file changes to the web site using FileZilla.  But then he discovered Netlify, which uses the JamStack approach and makes it incredibly easy to commit his changes to GitHub and have them validated, compiled, optimized, and automatically deployed to the production web site.  Now he is telling everyone they should use JamStack and Netlify for all web development.  And perhaps they should, if they are deploying static public web sites.  But some developers are deploying to internal web sites.  Some developers have database changes and dependencies on code from other teams.  Some developers have mission-critical applications that can’t risk ten seconds of downtime due to an error in production.  These are different contexts, and the Netlify/JamStack approach may be undesirable for them.

In short, any “best practice” applies within a specific context, and is unlikely to apply to all contexts.  We need to identify the best practices that we can apply to our contexts.  Some people are dogmatic about their ideas of best practices.  They believe a process should always be applied in all software contexts, but personally, I think there may be no software practice that is always best.  I think there are many contexts in which some process recommendations are poor choices.

Subsequent articles will examine processes one at a time, explaining why the processes are poor choices for some contexts.  The list of process recommendations that will be shown to be poor choices in some contexts include the following: Scrum, TDD, Pair-Programming, Continuous Deployment (CD), Story Points, Velocity, Code Reviews, Retrospectives, Management By Objectives (MBO), and metrics.

Main Content: Is CI/CD Right For You?

The acronym CI/CD stands for “Continuous Integration” and “Continuous Delivery” and also “Continuous Deployment”.  Usually, perhaps always, CI needs to be implemented before either of the CDs can be implemented.  CI refers to coders pushing code changes to a shared team repository frequently, often multiple times per day.  In many contexts, a build of the software is started once the push into the shared code base is complete.  In some contexts, builds of the shared repository may occur on a regular basis such as every hour.  The benefit of automated builds is to quickly identify cases in which the code changes made by two developers conflict with each other.

Along with running a build, many CI pipelines perform other tasks on the code.  These tasks include “static code analysis” aka “linting” to determine if the code follows code formatting standards and naming conventions as well as checking the code for logic bugs and bugs that create security vulnerabilities.  Static code analysis may detect if new and unapproved third-party libraries were introduced, or that code complexity levels exceed a tolerance level, or simply if a method has more lines of code within it than is approved by company standards.  If the code compiles and passes the static code analysis checks, it may then have a suite of unit tests and integration tests executed to verify no existing logic in the application was broken by the latest code changes.

CI encompasses two practices: frequent code pushes and automated builds.  Some of the benefits of frequent code pushes include:

  • Developers get feedback more quickly if they have made changes that conflict with something other developers committed,
  • A developer’s changes are less likely to conflict with changes from other developers when the code base they are working from is more current and the number of changes in each push is smaller,
  • Reviewing the code of other developers is easier when developers check in frequently, because there are likely fewer code changes to review,
  • Since code reviews take less time, they interrupt the flow of the developer reviewing the code less,
  • Since code reviews take less time, other developers are more willing to perform the code review and are more willing to do it soon.

Some of the benefits of automated builds include:

  • A relatively quick response if code pushed to a shared repository causes a build failure,
  • Relatively quick feedback from static code analysis tools to identify problems,
  • Relatively quick feedback from unit tests and integration tests if code changes created bugs in the application.
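The fail-fast behavior behind these benefits can be sketched in a few lines.  The following Python sketch is purely illustrative and is not tied to any particular CI product; a real pipeline would shell out to a compiler, linter, and test runner rather than call stand-in functions:

```python
# Hypothetical sketch of a CI pipeline runner; real CI servers work on the
# same fail-fast principle but invoke a compiler, linter, and test runner.

def run_pipeline(stages):
    """Run (name, check) pairs in order; return (passed, failed_stage)."""
    for name, check in stages:
        if not check():
            return False, name  # fail fast: report the first broken stage
    return True, None

# Stand-in checks; each lambda represents one stage of the build.
stages = [
    ("build", lambda: True),
    ("static-analysis", lambda: True),
    ("unit-tests", lambda: False),  # pretend a unit test failed
]
passed, failed_stage = run_pipeline(stages)
# → passed is False, failed_stage is "unit-tests"
```

The pipeline stops at the first failure, which is what gives developers the “relatively quick response” described above: they learn immediately which stage broke.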

CI is very beneficial and invaluable to many software development teams, but it is not necessarily a benefit in all software development contexts.  Some contexts that may gain little value from CI include:

  • When there is only a single developer on the project and a CI environment does not already exist,
  • When developers are exploring a new programming language for development.  In this case, the time required to set up the CI may be more than it is worth for something that might get discarded,
  • When the coding language is a scripting language and there is no compile step,
  • When a team has no automated build currently and creates the compiled version of their product on their local machines to deploy to production.

Should You Make a Process Change?

When evaluating if any process like CI is good for you, the most important factors in making the assessment are:

  • Is it worth my (our) time to implement the process or practice?
  • Will the process improve our product quality?

If the process does not improve quality, meaning it doesn’t decrease bugs, decrease security vulnerabilities, or improve application performance, and it takes significantly more time to implement than it will save, then you should probably not implement it.

Should You Consider CI?

With this definition of CI and criteria for assessment, I believe that most development teams should consider implementing CI into their development process.

CD Benefits and Drawbacks

The other aspect of CI/CD is CD.  CD stands for either “Continuous Delivery” or “Continuous Deployment” or both.  Both terms imply that successful software builds get automatically pushed into an environment where they can be tested or used, but some teams prefer to use “Delivery” for non-production environments and reserve “Deployment” for the “production” environment.  Also, some teams don’t really automatically deploy the software to an environment after it successfully builds.  Instead, the deployment to an environment requires a button click by an authorized person to proceed.

CD has less adoption than CI, partially because CI is generally a pre-requisite to CD, but mostly because CD is not a good deployment strategy for many software products.  In fact, CD is not even possible for software embedded onto chips and placed into airplanes, cars, refrigerators, rockets, and many other devices.  Nor is CD practical for most desktop applications and applications delivered to phones and other devices through an app store.  CI/CD is most likely useful in the deployment of web applications and web APIs. 

Some of the benefits of Continuous Delivery to a non-production environment include:

  • It forces you to codify everything.  That means you need to figure out how to automate and automatically apply the environment specific configuration values.  You may need to learn how to use containers to simplify deployment and to version control and automate all aspects of devops,
  • It creates possibilities.  For example, once you have taken the time to create a pipeline where your automated build can flow into a deployed environment, you discover you can now easily automate your pen tests and stress tests against a test environment,
  • A developer can test some changes in a deployment environment that they can’t test in the developer’s environment (aka: works on my machine), such as:
    • Changes related to threading behavior in applications,
    • Changes related to users from multiple devices simultaneously using the application,
    • Stress testing the scalability of an application,
    • Testing features affected by security mechanisms such as OAuth and TLS.
  • Developers can get new features into functioning environments faster for the benefit of others,
  • Developers can deploy to testing environments easily without waiting on the devops team or IT team to perform a task for them,
  • It becomes easier to create a totally new environment.  Perhaps you currently have just production and test.  Now you can add a QA environment or a temporary environment relatively easily if all the deployment pieces are automated.

Some of the benefits of Continuous Deployment to a production environment include:

  • Developers can get new features and fixes to production faster.  This is very valuable for static sites.  This is a core feature of most JamStack sites.
  • Developers can deploy without waiting on the devops team or IT team to perform a task for them.

Some of the challenges of Continuous Delivery to a non-production environment include:

  • Applying changes to databases in coordination with changes to code,
  • Interrupting testing that others currently have in progress,
  • Ensuring the correct sequence of deployments when multiple teams are deploying to a shared testing environment and some changes depend on sequence.

Some of the impediments of Continuous Deployment to a production environment include:

  • The need for the software to be burned into a physical chip,
  • The need for the software to be published through an app store,
  • The need for lengthy integration testing and/or manual testing before production, often due to the need to ensure the software is correct if it is used to keep people alive,
  • The desire by the company to find any problems with the software before customers interact with it.

Should You Consider CI/CD?

Should you adopt CI and/or CD?  That is a question you need to answer for yourself.  Not only should you consider the benefits and value CI/CD may bring to your development process, you should also consider whether adopting a process is more valuable than other changes you could make.  Just because we recognize the value in implementing a specific change doesn’t mean we should implement it right away.  Perhaps it is more valuable to your company to complete a project you are working on before implementing CI/CD.  Perhaps it is more valuable to wait six weeks for the free training sessions on TeamCity and Octopus Deploy to be available, if those are the tools you are considering using for CI/CD.  Perhaps you are considering moving from Subversion to Git.  If so, you should probably make that change before you build a CI/CD solution; otherwise you may need to rebuild your CI/CD pipeline completely.  Also, going from manual builds to CI/CD is unlikely to be completed in a short time frame.  It is something you will progressively implement, adopting more aspects over time.

I believe that most development teams should consider implementing “Continuous Delivery” into their development process.  “Continuous Deployment” to production is probably primarily of value to teams deploying static web sites, and maybe a few sites with small amounts of data or unimportant data.

Posted in CodeProject, Coding, Dev Environment, Process, Software Development

An Example of Upgrading Code From Average to Excellent

Posted by robkraft on June 13, 2021

Average programmers get the job done.  Excellent programmers get the job done too, but the code of excellent programmers lasts longer and is easier to change to meet future requirements.  Below is an example of upgrading average code to excellent code.

Our team decided an application could benefit from caching some data in our .Net application.  We selected a caching framework with a good reputation and implemented it in one of our business objects.  Here is the code:

private static IAppCache _cache = new CachingService();
public static IList<Zone> AllZones
{
    get
    {
        return _cache.GetOrAdd("zones", () => ZoneManager.LoadAll(),
            new DateTimeOffset(DateTime.Now.AddHours(1)));
    }
}

The code worked and we noticed a performance improvement in our Zones class.  The code was easy to implement.  We decided we would implement something similar in several other classes.  However, the above code is just “average”.  What I mean by that is while “it works”, it could be made better.  Allow me to talk through many of the considerations carefully.

  1. If the code solved a problem and we desperately needed to get it into production right away to keep from losing thousands of dollars per hour, then we should probably deploy it right away.  The business need is more important than the developer’s preference for robust code.
  2. If this was the only place in the application that would need this type of caching logic and it is working well, then the code is probably “good enough” and needs no further consideration or enhancement.
    1. In fact, if this code works as is and would never need to be modified again, spending additional time to enhance the code (from good to excellent) would actually be a failure.  It would be a waste of time, assuming that the developer(s) involved had alternative activities they could engage in that would add more value to the software or to the developers’ skills than refactoring the above code.
    2. If the above code exists specifically for a short-term project and will be deleted within a week, and refactoring would not improve performance but would only make the code easier to maintain in a future the code does not have, then refactoring the code is pretty much a waste of time.

In our application, the above caching code was the start of a pattern for caching we wanted to implement in other business objects.  Whenever you are writing code that will serve as a pattern used by many developers across the code base, you probably want to think about improving the code in the following ways:

  1. Make the code as easy to implement in each place as possible.  That means:
    1. minimizing the references you need to add for each implementation;
    2. minimizing the properties, methods, and supporting features you have to add each time you use it;
    3. minimizing the code changes you need to make as you copy/paste it from one place to another.
  2. You want developers to fall into the Pit of Success.  This means you want to make it easy for everyone using the pattern to get it right.  The fewer changes they have to make, and the more obvious those changes are, the more likely the other developers will successfully implement similar code in other places.

We were aware that the framework we used for caching could change.  We started with a framework called LazyCache that uses the IAppCache interface and CachingService, which you can see in the code above.  But what would happen if we decided we needed a different caching framework in the future?  We might have to go back to each business object and change the interface we used, and probably replace CachingService.  Can we refactor this code to make it unlikely the code in the business object needs to change if we decide to use a different caching technology such as Redis?  I believe the answer is ‘yes’.  Our goal then becomes the following:

  1. Move all the code specific to our caching technology (LazyCache in this story) into a separate class, so that our business objects are totally unaware of the technology used for caching.
  2. Pass all the values and methods needed for the caching feature from the business object to the new caching class we create.
  3. Minimize the things we need to pass to the caching framework to make the code easy to implement in each business object.
  4. Write the caching class in a way so that it also does not need to know much about the business objects that it is caching.  Make sure that it does not need to reference those business objects.
  5. Write code so that the caching class does not need to be changed when it gets used by additional business objects.  It can become a “black box” to future developers.

The revised code in the business object looks like the following:

public static IList<Zone> GeoZonesFromCache
{
    get
    {
        return CacheManager.GetOrAdd(CacheManager.ZONES, () => ZoneManager.LoadAll());
    }
}

Notice that it no longer has any reference to the caching service.  This should allow us to change the way the data is cached from LazyCache to some other caching framework, including caching services like Redis Cache, without needing to modify each business object.  We pass the minimum information: the name of the cache (a readonly string named CacheManager.ZONES) and the function to run (ZoneManager.LoadAll) if the cache is not already populated.

We did have to write more code in our CacheManager.  Here is the code:

using System;
using System.Collections.Generic;
using LazyCache;

namespace MyApp
{
    public static class CacheManager
    {
        private static DateTimeOffset GetCacheDuration(string cacheType)
        {
            return new DateTimeOffset(DateTime.Now.AddHours(1));
        }

        private static IAppCache _cache = new CachingService();

        public static IList<T> GetOrAdd<T>(string cacheType, Func<IList<T>> itemFactory)
        {
            return _cache.GetOrAdd(cacheType, itemFactory, GetCacheDuration(cacheType));
        }

        public static readonly string ZONES = "zn";
    }
}

In the code above I include a using statement for LazyCache.  This is the only class in the entire application that references LazyCache.  If we decide we want to replace LazyCache with Redis cache or some other cache, then this one class should be the only place in our app where we need to make a change.
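To see the facade idea independently of .Net, here is a rough Python analogue.  It is purely illustrative: the dict-based store and the one-hour default duration stand in for LazyCache, and the names mirror the C# code above.

```python
import time

class CacheManager:
    """Facade over a caching backend; callers never see the backend."""
    _store = {}  # stand-in backend; swapping in Redis would change only this class

    ZONES = "zn"  # named constant instead of scattering the literal "zn"

    @classmethod
    def get_or_add(cls, key, item_factory, duration_seconds=3600):
        entry = cls._store.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]  # cache hit that has not yet expired
        value = item_factory()  # cache miss: build the value and store it
        cls._store[key] = (value, time.time() + duration_seconds)
        return value

calls = []
def load_all():
    calls.append(1)  # track how many times the expensive load actually runs
    return ["zone-a", "zone-b"]

first = CacheManager.get_or_add(CacheManager.ZONES, load_all)
second = CacheManager.get_or_add(CacheManager.ZONES, load_all)
# → load_all ran once; the second call was served from the cache
```

As with the C# version, callers pass only a key and a factory function, so changing the backing store would change nothing outside this one class.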

Is the code that we wrote perfect?  The answer to that is definitely ‘Maybe’.  We can’t know if existing code is perfect until time has passed and we determine if it met our needs.  I believe that the code has to meet these criteria to be considered perfect:

  1. If the code never needs to be changed, then it was perfect,
  2. If the code does need to be changed, but it is easy for the developer to change in order to adapt to a requirement change, then it could still be considered perfect when originally written, especially if the changes are isolated to the CacheManager class and don’t need to be made in each business object that uses it.
  3. If changes are needed in each business object, but the changes are easy to implement and were not foreseen when the code was first written, then you could still consider the original code to be perfect.  For example, perhaps some business objects need to pass the cache duration into the CacheManager service.  Assuming that code is easy to implement, and it was not originally expected and coded for, then the original code could still be considered perfect. 

Some of you may identify that improvements to the CacheManager are still possible, and that is certainly true.  One improvement would be a change to make it easier to write unit tests for the business objects using CacheManager.  The code I have above is hard-coded to use the “CachingService”.  It could be helpful if the “CachingService” could be mocked away in unit tests, especially if you replace the CachingService with Redis Cache.  Fortunately, given that the caching code is contained within a single class, it would be fairly simple to change the CacheManager to use an IOC framework.  I won’t delve into that here other than to point out that part of writing excellent code might include the ability to unit test that code you have written.  I will also point out that the refactored code (from “Average” to “Excellent”) makes the writing of unit tests to test the CacheManager itself easier.

You may also notice that the code contains no error handling.  I did not include error handling because it would add clutter that is irrelevant to the topic of this article, and also because some programmers may prefer errors to bubble up the call stack to be caught elsewhere in the application.

To recap this article, if you want to go from being an average developer to an excellent developer in a scenario like this you should do the following:

  • Before writing code, first identify if there is already a solution to the problem in your code base and determine if you can reuse the same solution.
  • When you are writing code, ask yourself if part of the code could be re-used in other places in the application, or perhaps in the future.
  • Take some extra time to write the code in a way to make the pattern easy to implement correctly in every place it will be implemented (or at least in many of the places).
  • Write the code in a way to minimize the need for modifications to each business object if changes to the service being implemented are needed.
  • Make the places where the pattern is implemented unaware of the details of how it is implemented.  In other words, the business objects have no idea what caching technology is used and won’t need to be altered if that caching technology is changed.
  • Make the pattern unaware of the specifics of the objects that are using it.  In other words, the CacheManager does not need to be able to reference and understand the business objects that are calling it.  There is no dependency there.
  • Don’t pass hard-coded strings.  Use an enum or string constants.  This allows you to change the string value in a single place if you ever need to do so and, more importantly, ensures there are no string typos.  In the class above, this was done for the cache type (CacheManager.ZONES).  We could have passed a literal string “zn” into the CacheManager, but we anticipate adding logic, perhaps a switch statement based on the cacheType to obtain the duration for the cache, which would mean a second place (the switch statement) also with a hardcoded string of “zn”.  By using a readonly variable we eliminate the risk of typos for that string.

This is not an article about caching.  This is an article about taking code that is average and making it better, using the implementation of some caching logic as an example.  Some of you may notice that our change to make the code better is also an example of “Separation of Concerns”.  This is true, but the article is not about “Separation of Concerns”, it is simply about writing better code.  Finally, I will point out that this example is also an example of the “Façade” pattern, but again, this is not an article about the “Façade” pattern, just an article about better code.

I hope this article helps some average developers along their journey to becoming excellent developers.

Posted in Code Design, CodeProject, Coding

C# .Net SQL Injection Detection – Especially for Legacy .Net Code

Posted by robkraft on March 4, 2020

When you have an existing .Net code base full of SQL statements, and you want to reduce the chance that there are SQL injection risks in the code, you may decide to perform a review of every SQL statement in order to confirm that they are all coded correctly; or you may hire another company to do this for you. But one problem with this approach is that the code is only “SQL Injection Free” from the moment the review is completed until people start modifying and adding to the code again.

What you should strive for is a way to make sure every past and future SQL statement gets tested for SQL Injection risk before it runs.  That is what this sample code provides you.  If you follow the patterns described here, I believe you can significantly reduce the risk that your code has bugs leading to SQL Injection and it will stay that way going forward.

Using the Decorator Pattern to Provide a Place To Add SQL Injection Detection

The primary technique I recommend in this article for adding SQL Injection detection into your application is to stop using the .ExecuteReader and .ExecuteNonQuery methods.  Instead, use the Decorator pattern to create your own method to be called in place of those two, and that method will include code to do some SQL Injection detection.

Replace:

SqlDataReader reader = command.ExecuteReader();

With:

SqlDataReader reader = command.ExecuteSafeReader(); //provided in sample code

The sample code provided behaves like the Proxy pattern in that it will make the actual call to the database after finding no SQL Injection risk.  The benefit of this approach is that you can then regularly scan your entire code base for uses of .ExecuteReader and .ExecuteNonQuery, knowing that there should be none other than the exception cases you expect.  Thus you can be sure that the majority of your code is running through your SQL Injection detector.

Another benefit of using the Decorator pattern to implement SQL Injection Detection is that you can also easily add other features such as:

  • Logging every SQL that is executed
  • Logging and blocking every SQL that is a SQL Injection risk
  • Altering every SQL on the fly.  One scenario where this could be helpful is when you have renamed a table in the database and a lot of SQL needs to change.  You could add a find/replace to every SQL on the fly to change the table name, allowing you more time to find and correct all stored SQL fragments with the old table name.

	public static SqlDataReader ExecuteSafeReader(this SqlCommand sqlcommand)
	{
		if (!sqlcommand.CommandType.Equals(CommandType.StoredProcedure))
		{
			var sql = sqlcommand.CommandText;
			//Options: You could Add logging of the SQL here to track every query ran
			//Options: You could edit SQL - for example if you had renamed a table in the database
			if (!ValidateSQL(sql, SelectRegex))
				return null;
		}

		return sqlcommand.ExecuteReader();
	}
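The same choke-point idea can be expressed in other languages too.  Here is a rough Python analogue using a function decorator; it is illustrative only, and validate_sql is a deliberately trivial stand-in for the real checks described in the next section:

```python
def validate_sql(sql):
    """Stand-in validator; the real rules are far more thorough."""
    return ";" not in sql

def safe_execute(execute_fn):
    """Wrap a query-executing function so every SQL is validated first."""
    def wrapper(sql, *args, **kwargs):
        if not validate_sql(sql):
            raise ValueError("SQL rejected: possible injection")
        return execute_fn(sql, *args, **kwargs)
    return wrapper

@safe_execute
def execute_reader(sql):
    # Stand-in for the real database call.
    return "executed: " + sql

result = execute_reader("select name from customers")
# → "executed: select name from customers"
```

Because every query funnels through the wrapper, a code-base scan only needs to confirm that nothing calls the raw execute function directly.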

The SQL Injection Detection Code

Warning!  This does not detect all forms of SQL Injection, but it will detect most of them.  Here is what causes the class to throw an exception:

  • Finding a single apostrophe (single quote) that does not have a matching single apostrophe (single quote)
  • Finding a double quote that does not have a matching double quote.  This is only needed if the SQL Server has SET QUOTED_IDENTIFIER OFF.  However, you may also want to use this if your database is MySQL or some other DBMS.
  • Finding a comment within the SQL
  • Finding an ASCII value greater than 127
  • Finding a semicolon
  • After extracting the strings and comments, finding any of a specific configurable list of keywords in a SELECT statement such as DELETE, SYSOBJECTS, TRUNCATE, DROP, XP_CMDSHELL

The code is written to be easy to change if you don’t want to enforce any of the rules above, or if you need to add similar rules because you have a special scenario or a DBMS besides SQL Server.

The code uses the regex [^\u0000-\u007F] to reject the SQL if it contains any non-ASCII characters.  This works for the applications I have written, but may need alteration to support languages beyond American English.

The code also uses regexes to check SQL statements for undesirable keywords.  One regex is for SELECT statements and therefore blocks them if they contain INSERT, UPDATE, or DELETE.  Other keywords that may indicate a SQL Injection attempt are also rejected and that list includes waitfor, xp_cmdshell, and information_schema.  Note that I also include UNION in the list; so if you use the UNION keyword you will need to remove that from the list.  UNION is frequently used by hackers attempting to perform SQL Injection.

private static void LoadFromConfig()
{

	_asciiPattern = "[^\u0000-\u007F]";
	_selectpattern = @"\b(union|information_schema|insert|update|delete|truncate|drop|reconfigure|sysobjects|waitfor|xp_cmdshell)\b|(;)";
	_modifypattern = @"\b(union|information_schema|truncate|drop|reconfigure|sysobjects|waitfor|xp_cmdshell)\b|(;)";
	_rejectIfCommentFound = true;
	_commentTagSets = new string[2, 2] { { "--", "" }, { "/*", "*/" } };
}

SQL Server supports two techniques to comment out SQL code in a SQL Statement, two dashes, and enclosing the comment in /* */.  Since it is unlikely that developers write SQL to include comments, my default choice is to reject any SQL containing those values.

Exactly How Is The SQL Injection Detected?

There are basically three steps in the SQL Injection detection process.

First, the code checks for any ASCII values above 127 and rejects the SQL if one is found.

Second, the code removes all the content within strings and comments.  So a SQL statement that starts out looking like this:

select * from table where x = 'ss''d' and r = 'asdf' /* test */ DROP TABLE NAME1 order by 5

becomes this:

select * from table where x =  and r =  DROP TABLE NAME1 order by 5

Third, the code looks for keywords, like “DROP” and “XP_CMDSHELL”, in the revised SQL that are on the naughty list.  If any of those keywords are found, the SQL is rejected.
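The three steps can be sketched as follows.  This is a Python illustration of the process, not the article’s actual C# code; the keyword list loosely mirrors the SELECT regex shown earlier:

```python
import re

# Illustrative sketch of the three detection steps for SELECT statements.
KEYWORDS = re.compile(
    r"\b(union|information_schema|insert|update|delete|truncate|drop"
    r"|reconfigure|sysobjects|waitfor|xp_cmdshell)\b|;",
    re.IGNORECASE,
)

def is_select_safe(sql):
    # Step 1: reject any character outside the ASCII range 0-127.
    if re.search(r"[^\u0000-\u007F]", sql):
        return False
    # Reject unbalanced single quotes and any comment markers outright.
    if sql.count("'") % 2 != 0 or "--" in sql or "/*" in sql:
        return False
    # Step 2: remove quoted strings so keywords inside literals are ignored.
    stripped = re.sub(r"'[^']*'", " ", sql)
    # Step 3: reject if any keyword on the naughty list survives the stripping.
    return KEYWORDS.search(stripped) is None

safe = is_select_safe("select * from customers where name = 'O''Brien'")
unsafe = is_select_safe("select * from t where x = 'a' /* x */ DROP TABLE n")
# → safe is True, unsafe is False
```

Note that the doubled quote in 'O''Brien' is treated as two adjacent string literals, so a properly escaped name passes while the commented, keyword-laden statement is rejected.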

Formatting Methods included in the SQLExtensions Class

The SQLExtensions class provides additional methods to help your coders reduce the risk of SQL Injection.  These methods help coders format variables in SQL when using a parameter is not an option.  The most useful of these methods is FormatStringForSQL, and it could be used as shown here to enclose a string in SQL quotes and to replace any single quotes contained within the value with two single quotes.


string sql = "select * from customers where firstname like " + nameValue.FormatStringForSQL();

Another advantage of using a method like this is that it gives you a single place to change how strings are formatted throughout your code if you discover you need to make a change.  For example, perhaps you decide to move your application from SQL Server to MySQL and find that you also need to replace double quotes in addition to single quotes.  You could make the change within this method instead of reviewing your entire code base to change each dynamically built SQL statement one by one.
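For illustration, a Python analogue of what FormatStringForSQL does might look like this (the function name and behavior are my paraphrase of the C# extension method):

```python
def format_string_for_sql(value: str) -> str:
    """Double up embedded single quotes, then wrap the value in SQL quotes."""
    return "'" + value.replace("'", "''") + "'"

# A name like O'Brien becomes 'O''Brien', safe to embed in a SQL string.
sql = "select * from customers where firstname like " + format_string_for_sql("O'Brien")
```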

Custom .Net Exception Class

I also provided a custom exception class, primarily to show how easy it is to implement custom exceptions, and because I think it is useful for this extension class.  It gives you more flexibility for handling errors: you can catch exceptions raised specifically due to SQL Injection risk and handle them differently than exceptions thrown by the underlying ADO.NET code returned from the database.


[Serializable]
public class SQLFormattingException : Exception
{
	public SQLFormattingException() {}

	public SQLFormattingException(string message): base(message) {}
}

The Rules For Detecting SQL Injection are Configurable

I made the SQL Injection detection rules easy to enable, disable, and configure so that you can import them at runtime if desired, allowing different applications to use different rules.  Perhaps one of your applications needs to allow semicolons in SQL but the others don’t.  It is a good practice to implement the most stringent rules you can everywhere you can; don’t weaken the SQL Injection detection everywhere just because a single place in your code needs weaker rules.  The rules are lazy loaded when first needed, then cached; calling the provided InvalidateCache method lets you change them while the application is running.

Below is an example of one of the rules.  You can configure your code to reject the SQL if it contains SQL Server comments.


#region RejectComments Flag
private static bool? _rejectIfCommentFound = null;
public static bool RejectIfCommentFound
{
	get
	{
		if (_rejectIfCommentFound == null)
		{
			LoadFromConfig();
		}
		return (bool)_rejectIfCommentFound;
	}
}
#endregion

Steps To Implement and Use This Code

I suggest you take the following steps to implement this class:

  1. Get the SQLExtensions.cs class file into a project in your code base.  You will also need the CustomExceptions.cs class file.  The program.cs just contains a sample usage, and there is also a UnitTest1.cs class.
  2. Comment out all the lines in ReallyValidateSQL except for the “return true”.
  3. Do a find and replace across your entire code base to replace ExecuteReader with ExecuteSafeReader.
  4. Compile and test.  Your app should still work exactly the same at this point.
  5. Review the customizable validation properties and decide which ones you want to implement, then uncomment the lines you commented out in ReallyValidateSQL.
  6. Decide if you need or want to replace dynamically constructed SQL in your application with any of the four FormatSQL… extension methods provided.
  7. Provide me feedback.

MIT FREE TO USE LICENSE

This code has an MIT license which means you can use this code in commercial products for free!

A link to the source code example is here: https://github.com/RobKraft/SQLInjectionDetection

Posted in Code Design, CodeProject, Coding, Security | 2 Comments »

SQL Server’s sp_executesql Does Not Protect You from SQL Injection

Posted by robkraft on August 18, 2019

Many coders of SQL have learned we can dynamically construct SQL statements inside of stored procedures and then execute the constructed SQL.  In Microsoft’s SQL Server product there are two commands we can choose for running the constructed SQL:

  • EXEC (EXEC is an alias for EXECUTE, both do the same thing).
  • sp_executesql.

We SQL Server “experts” often advise coders to use sp_executesql instead of EXEC when running dynamically constructed SQL statements to reduce the risk of SQL Injection, and this is good advice.  But it is not the use of sp_executesql that prevents SQL injection, it is the use of parameters with sp_executesql that helps protect against SQL Injection.  You can still construct SQL dynamically and run that SQL using sp_executesql and be affected by a SQL Injection attack.

If you use parameters to substitute all the values in the SQL and then use sp_executesql you have probably eliminated the SQL Injection risk; but as a developer this means you may be unable to dynamically construct the SQL you want to run.

When you use sp_executesql parameters correctly, you can only replace data values in your SQL statement with values from parameters, not parts of the SQL itself.  Thus we can do this to pass in a value for the UserName column:

declare @sql nvarchar(500)
declare @dynvalue nvarchar(50)
select @dynvalue = 'testuser'
SET @sql = N'SELECT * FROM appusers WHERE UserName = @p1';
EXEC sp_executesql @sql, N'@p1 nvarchar(50)', @dynvalue

But the following code will return an error when trying to pass in the name of the table:

declare @sql nvarchar(500)
declare @dynvalue nvarchar(50)
select @dynvalue = 'appusers'
SET @sql = N'SELECT * FROM @p1';
EXEC sp_executesql @sql, N'@p1 nvarchar(50)', @dynvalue

Msg 1087, Level 16, State 1, Line 1
Must declare the table variable "@p1".

If you are dynamically constructing SQL and changing parts of the SQL syntax other than the values of variables, you need to write code yourself to test those pieces of the SQL for injection risk.  This is difficult to do and is probably best handled by the application calling the stored procedure.  I recommend the calling program do the following at a minimum before calling a stored procedure that dynamically constructs SQL:

  1. Validate the length of the parameter.  Don’t allow input longer than the maximum length expected.  For example, if the stored procedure accepts a column name used for sorting in an ORDER BY clause, and all of your column names are 10 characters or fewer in length, make sure the parameter passed in does not exceed 10 characters.
  2. Don’t allow a lone single quote; replace every single quote with two single quotes.
  3. Don’t allow other special characters or commands, such as a semicolon, the UNION keyword, or the two hyphens that begin a SQL comment.
  4. Don’t allow character values greater than 255.

That short list is not sufficient to prevent all SQL Injection attacks, but it will block a lot of them and gives you an idea of the challenge involved in preventing SQL Injection attacks from being effective.
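The four checks above can be sketched as follows (a Python sketch; the function name, the 10-character limit, and the token list are illustrative choices, not a complete defense):

```python
def validate_sql_fragment(fragment: str, max_len: int = 10) -> str:
    """Apply the four minimum checks before embedding a fragment in dynamic SQL."""
    if len(fragment) > max_len:                       # 1. length limit
        raise ValueError("fragment exceeds expected length")
    safe = fragment.replace("'", "''")                # 2. double up single quotes
    lowered = safe.lower()
    for token in (";", "--", "union"):                # 3. forbidden tokens
        if token in lowered:
            raise ValueError("forbidden token: " + token)
    if any(ord(ch) > 255 for ch in safe):             # 4. no values above 255
        raise ValueError("character value above 255")
    return safe
```

With these checks, a legitimate column name like `MyData` passes, while a payload such as `MyData; SELECT * FROM Users` is rejected by both the length check and the semicolon check.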

If you would like to see for yourself how the EXEC and sp_executesql statements behave, I have provided a script you can use to get started.  The most important query to understand is the last one, because it shows SQL injection succeeding even though the dynamically generated SQL is run using sp_executesql.

--1. Create tables and add rows
DROP TABLE InjectionExample
GO
DROP TABLE Users
GO
CREATE TABLE InjectionExample ( MyData varchar (500) NULL)
GO
INSERT INTO InjectionExample VALUES('the expecteddata exists'), ('data only returned via sql injection')
GO
CREATE TABLE Users( username varchar(50) NULL,[password] varchar(50) NULL)
go
INSERT INTO Users VALUES ('user1','password1'), ('user2','password2'), ('user3','password3')
GO
--2. Run a test using EXEC with data the programmer expects
declare @sql nvarchar(500)
declare @p1 nvarchar(50)
select @p1 = 'expecteddata'
select @sql = 'SELECT * FROM InjectionExample WHERE MyData LIKE ''%' + @p1 + '%'''
exec (@sql) --returns 1 row as expected
GO

--3. Run a test using EXEC with data the hacker used for sql injection
declare @sql nvarchar(500)
declare @p1 nvarchar(50)
select @p1 = ''' or 1 = 1--'
select @sql = 'SELECT * FROM InjectionExample WHERE MyData LIKE ''%' + @p1 + '%'''
exec (@sql) --returns all rows - vulnerable to sql injection
GO

--4. Run a test using sp_executesql to prevent this SQL Injection
declare @sql nvarchar(500)
declare @p1 nvarchar(50)
select @p1 = 'expecteddata'
select @sql = N'select * from InjectionExample WHERE MyData LIKE ''%'' + @param1 + ''%'''
exec sp_executesql @sql, N'@param1 varchar(50)', @p1
GO

--5. Run a test using sp_executesql to prevent this SQL Injection - hacker data returns no results
declare @sql nvarchar(500)
declare @p1 nvarchar(50)
declare @pOrd nvarchar(50)
select @p1 = ''' or 1 = 1--'
set @pOrd = 'MyData'
select @sql = N'select * from InjectionExample WHERE MyData LIKE ''%'' + @param1 + ''%'' order by ' + @pOrd
exec sp_executesql @sql, N'@param1 varchar(50)', @p1
GO

--6. But sp_executesql does not protect against all sql injection!
--In this case, sql is injected into the @pOrd variable to pull data from another table
declare @sql nvarchar(500)
declare @p1 nvarchar(50)
declare @pOrd nvarchar(50)
set @p1 = 'expecteddata'
set @pOrd = 'MyData; SELECT * FROM Users'
select @sql = 'select * from InjectionExample WHERE MyData LIKE ''%'' + @param1 + ''%'' order by ' + @pOrd
exec sp_executesql @sql, N'@param1 nvarchar(50)', @p1


Posted in CodeProject, Security, SQL Server | Leave a Comment »

What Makes A Software Programmer a Professional?

Posted by robkraft on June 16, 2019

Many people write code, but not everyone that codes considers themselves to be a professional programmer.  What does it take to be a professional?  This article lists the practices you undertake when you are a software development pro.  From my experience, many of these practices are not taught in coding schools, but they are essential to delivering quality software to customers.

Before covering the practices, let’s first briefly consider different classes of those who write code:

The Tinkerer
A tinkerer is a person who writes a few lines of code here and there: perhaps a macro in Excel, perhaps connecting services in IFTTT or Microsoft Flow, or perhaps a script to run in a database. A tinkerer may also use a language like Basic or JavaScript to create a program or web site for their own personal use and enjoyment.

The Amateur

A programmer becomes an amateur instead of a tinkerer when the programmer starts writing software or web sites for others, especially when compensated for the work.  Amateur programmers often create good software and write code well, but to be considered a professional, a programmer will follow several additional practices.

The Professional

The following list represents what I believe are practices that every professional software developer will follow.  As in all things, there may be some cases where it makes sense that one or two of these practices is not performed by a professional; but I believe most would agree that professionals follow all of these practices most of the time.

  1. Use version control
  2. Back up everything off site
  3. Track the changes, fixes, and enhancements in each release
  4. Keep the source code related to each deployed version that is in use
  5. Keep a copy of your software in escrow
  6. Use automated builds
  7. Schedule builds
  8. Write regression unit tests
  9. Use a bug tracking system
  10. Use a system to track tasks and features being developed
  11. Keep customers informed about the progress of the software development
  12. Keep third party software updated regularly
  13. Understand the security risks
  14. Ensure proper compliance with industry standards such as PCI, HIPAA, SOX, and PII
  15. Educate yourself continuously
  16. Invest in your development tools
  17. Properly license development tools and software
  18. Write documentation about the software
  19. Keep a journal of your activity

1. Use Version Control

A version control system should be used by all professional software developers.  It is difficult to imagine a solution for which version control would not apply.  Today, most developers use Git, but many also use Subversion (SVN).  The version control system used should have the following features:

  • Allow people to add comments explaining each check in, and track who checked it in
  • Allow people to view a history of changes checked in for each file
  • Allow people to revert to earlier versions of the software
  • Allow people to compare the code changes made across check ins

Zipping up the files that are part of each product release or check in, instead of using a version control system, is a sign of an amateur programmer.

2. Back Up Everything Off Site

A professional programmer will make sure that all of the code they write is backed up regularly, and that backup needs to be in an offsite location to prevent loss of all of the source code in the event of a fire, flood, theft, or some other event.  Given the ubiquity of the Internet today this is usually easily achieved; simply using an online repository like GitHub or Bitbucket for version control almost fully meets this practice in most cases.  Along with the backups of the source code, a professional will make sure that the scripts and commands for the tools and processes used to build the software are also backed up remotely.  A professional programmer should be able to start from nothing, acquire a new computer, and reconstruct everything needed to continue developing the software by recovering it all from the offsite backup.

3. Track the Changes, Fixes, and Enhancements in Each Release

A software professional provides more than software to their clients, they also provide a list of all the new features included in the latest version.  Of course there should also be a comprehensive list of the contents of the initial version.  In addition, a list of bugs fixed and other changes such as noticeable enhancements to performance and security should be provided.  A software professional tracks the reason behind every code change made during a release.  This is often helpful when clients are looking for a feature or fix because you can tell them they need to upgrade to a specific version to get it.

4. Keep the Source Code Related to Each Deployed Version That is in Use.

Modern version control software allows us to see how the code looked at any point in time.  A professional should be able to easily and quickly see how the code looked for any version of the software that any client currently has implemented.  This helps the developer resolve issues reported by clients more quickly, and to create a fix for the software.

5. Keep a Copy of Your Software in Escrow

No one likes to ponder the possibility of their own demise, but as a software professional you should consider this possibility and take steps to ensure that your clients can continue to use your software if something unfortunate happens to you.  Make sure that someone besides yourself can obtain access to your source code and all the artifacts and processes needed to build and support it if the need arises.

6. Use Automated Builds

Building software takes us from the source code to the final output that is deployed to a client or to a production system.  While builds often start out simple, they can evolve to become more complicated.  For this reason, a professional developer will set up an automated build process that compiles and combines all the source code into a package ready to deploy.  Minor tasks that can be automated so they are not forgotten include flagging the code as a release build so that the compiler will optimize, minimize, or obfuscate the output as needed.  Automated builds often update version numbers and perform several steps in a specific sequence.  Build processes can be automated initially with simple batch/command files, but many professionals use tools specifically designed for building software products.

7. Schedule Builds

Scheduling a build is just the next step, and hopefully a minor step, after a professional has created an automated build.  Many developers schedule a build to automatically run once or twice a day.  This is especially advantageous when multiple developers are contributing to the code.  Some developers even configure the version control system to start a build every time code is checked in.  Frequent builds help developers identify bugs more quickly if an artifact necessary for the build was excluded from their repository checkins, or if code they checked in adversely affected code in another part of the system.

8. Write Regression Unit Tests

I’ll admit that there may be a few cases where it does not make sense to write unit tests for your software.  But I believe that in almost all cases, even for old languages like Cobol and Visual Basic, a developer will write unit tests to validate that important logic in the software runs correctly and is not inadvertently altered by a new enhancement or related bug fix.  Getting a unit test framework up and running takes a few hours, or even a few days for someone not familiar with unit tests, but once you have it, the tests give you a lot of freedom to make changes and the peace of mind that the changes you make are not breaking existing logic that your customers depend upon.
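As a minimal illustration (the business rule here is invented for the example), a regression test using Python’s built-in unittest framework takes only a few lines:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule the regression tests protect."""
    return round(price * (1 - percent / 100.0), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_standard_discount(self):
        # 10% off 100.00 must always be 90.00; a future change that breaks
        # this rule fails the test run instead of reaching customers.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)
```

Run with `python -m unittest`; any enhancement that accidentally changes the discount logic now fails the build instead of surprising a customer.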

9. Use a Bug Tracking System

Let’s face it, almost all software has bugs, or things that are not working quite as the user desires.  Software developers need to track when those bugs were discovered and who discovered them so that they can make a careful fix and provide a patch to the affected systems and users.  A bug tracking system can help all customers and users of your software become aware of a bug, and sometimes of a workaround, until a fix is applied.  It can also let your customers know that they just need to upgrade to your latest version to obtain the fix.  In addition, it helps you keep track of when and where you made the fix so that you can manage related issues, handle follow-up issues, or just remind yourself that you already fixed this bug.

10. Use a System to Track Tasks and Features Being Developed

Software professionals understand that delivering quality software includes keeping track of a lot of details, so a system and place to record everything that needs to be done is very valuable.  Your system could be something as simple as a Trello board.  Ideally you will have a list of everything completed and everything yet to be done.  Most task boards use at least three columns: To Do, Doing, and Done.  Even when you think a task is done, you may decide it is not really done if you have not written the documentation related to the feature, or altered your install package to include a dependency for the feature.  The system also helps you remember the status of items and share that status, or the work to be done, with others.

11. Keep Customers Informed About the Progress of the Software Development

In most software development projects, frequent interaction with the end user, aka ‘the customer’ greatly increases the chance that the software you create meets their needs.  Frequent communication also helps you manage your customer’s expectations about when features will be delivered to them and how those features will behave.  Although the waterfall method of software development describes collecting all of the requirements up front, it is extremely rare that you really can gather all of the details as well as you can if you keep the customer involved as you develop and can show them the product as it evolves.  Professional software developers will not leave their customers in the dark for weeks or months before giving them an update about the progress of the software.

12. Keep third party software updated regularly

Most developers rely on some third party software to decrease the time it takes to produce the software their clients want.  For web developers, this may include frameworks and libraries like Microsoft .NET, Angular, React, jQuery, and Java.  But security flaws, performance problems, and other bugs occasionally get discovered in these frameworks, so a software professional regularly updates to the latest versions in order to obtain the security patches and fixes and pass the improved security and performance on to their clients.

I recommend you keep a list of all the third party software used by your software and your software development processes, and that you review the list at least twice a year in order to update each dependency to the latest version.

13. Understand the security risks

Professional software developers understand that they generally have more knowledge of software development than the customers that have hired them to write code.  Thus they understand that writing secure code, code that can’t be easily abused by hackers, is their responsibility.  A software developer creating web applications probably needs to address more security risks than a developer writing embedded drivers for an IOT device, but each needs to assess the different ways the software is vulnerable to abuse and take steps to eliminate or mitigate those risks.

Although it may be impossible to guarantee that any software is immune to an attack, professional developers will take the time to learn and understand the vulnerabilities that could exist in their software, and then take the subsequent steps to reduce the risk that their software is vulnerable.  Protecting your software from security risks usually includes both static analysis tools and processes to reduce the introduction of security errors, but it primarily relies upon educating those writing the code.

OWASP (https://owasp.org) is a good resource for developers to learn about possible vulnerabilities and ways to mitigate those vulnerabilities.

14. Ensure Proper Compliance with Industry Standards such as PCI, HIPAA, SOX, and PII

Writing software that complies with industry regulations is also a responsibility of a software professional.  It is the responsibility of the customer asking for the software to tell the developer that the software needs to meet specific regulations such as HIPAA, GDPR, PCI, SOX, or PII.  But it is the responsibility of the software professional to understand how those regulations affect the software and the software development processes.  A customer may suggest what impact the regulation has on the code, but if you are a software professional, you will refer to the regulation directly and form your own interpretation of the document.

15. Educate Yourself Continuously 

Technology continually changes, thus professionals continually learn new tools, techniques, and software languages in order to improve their efficiency and write software that lasts longer.  Even developers that have been programming in the same language and environment for a decade may discover new tools and processes that can help them write better code.  Perhaps new static code analysis tools can help you find bugs in your code, or perhaps you can write better software by switching from a waterfall methodology to an Agile approach.  Developers writing Visual Basic 6 code may realize they can begin writing more modular code and using classes to facilitate a possible rewrite to Java that is coming, but is still years away.  If you aren’t occasionally doing some research to find ways to improve, I don’t think you can consider yourself a professional software developer.  (If you are reading this article, you probably are a professional or planning to be one!)

16. Invest in Your Development Tools

A good carpenter doesn’t use a cheap saw, and a good software developer doesn’t just use the cheapest tools and equipment.  Luckily for software developers, many really good tools are free to use.  Some tools do cost a little bit of money, but the productivity improvement gained by paying for them is often well worth the price.

Besides software tools, developers sitting at computers writing code day after day probably benefit from having good hardware.  A computer with a lot of RAM and a fast hard drive and Internet connection may spare you waiting minutes or even hours each day.  Multiple monitors, a comfortable keyboard, mouse, and chair can contribute to your ability to write code a little more effectively each day.  Take time to invest in the small things that can improve your productivity, even if it is just by a small amount.

17. Properly License Development Tools and Software

Customers expect the people they hire to work ethically.  Well, perhaps unethical customers don’t expect ethical behavior or are willing to turn a blind eye, but if you declare yourself a professional software developer, I believe you are also declaring that you do things ethically, and part of being ethical is paying for the resources you use to develop the software solution.  This means you will pay the correct price and licensing fees for the tools you use for development.  It also means you won’t use the free or community version of a tool to develop professional solutions that clearly don’t qualify under that version’s rules of use.  Just as you are making a living writing software for your customers, the people and companies that created the tools you use to enhance your productivity are trying to make a living from their software too.

18.  Write documentation about the software

Professionals write documentation about the software to assist future developers that may need to take over maintenance and enhancements of the code.  Sometimes, that future developer could be yourself.  System diagrams, flow diagrams, use case diagrams, and comments in code explaining the complicated bits all go a long way toward helping future developers maintain the software if you are not around to do so.

19.  Keep a journal of your activity

Professionals often keep notes of their activities during software development.  The notes can serve several purposes, but most often they benefit yourself.  Perhaps you record why you chose one approach over another.  Perhaps you list expected benefits or drawbacks of a decision.  Perhaps you keep track of how often you perform IT maintenance or how you fixed some problems.  You may keep track of interactions with others and a record of tasks and responsibilities.  Professionals also use these notes to help explain where they spent their time and to explain why development is behind schedule, for those times when that happens.

Summary

I wrote this for people that are new to software development, particularly those who have completed a program in writing software and hope to embark on a career as a professional.  Most of the items covered in the article are not covered by formal software education programs, but are an essential aspect of writing good quality software.  Individuals writing software for a living on their own probably want to implement all of these practices in order to give current and potential clients confidence in the quality and professionalism of their work.

Please let me know if you feel something important is missing from this list so that we may improve this article as a good reference for developers.

And please check back for a future article where I cover those practices that make a software development “team” a team of professionals.

Posted in CodeProject, Coding, Process, Project Management | 2 Comments »

Use A Google Sheet To Send Reminder Emails To Your Team For Free

Posted by robkraft on May 26, 2019

A lot of small teams could use reminder emails when it is time for a team member to perform a task, but there are not a lot of products where you can easily set up reminder emails for team members for free.

But you can do it easily with a Google Sheet.

Building on the work of others, I created this little script, which you can copy/paste from https://github.com/RobKraft/GoogleSheetBasedEmailReminders

Open the Script Editor from the Tools menu of your Google Sheet and paste this script in.  The code is simple and documented if you desire to change it.

Then set up 4 columns in your Google Sheet.  Make row one the headers for the 4 columns:

  • Column A: Email Address – a single email address, or a comma-separated list of addresses, to send to
  • Column B: Reminder Begin Date – the date at which the reminder will start going out daily
  • Column C: Subject – the subject of the email
  • Column D: Email Body – the body of the email (the code also adds some extra information to the body)

You also need to create a trigger in your google sheet.

To do this, select the Edit menu in the script editor and choose Current project’s triggers. You may need to give your project a name and save it at this point. Add a trigger. At the time of this writing in May 2019, you would set these values for your trigger:

  • “Choose which function to run” – probably sendEmails
  • “Choose which deployment to run” – probably Head
  • “Select event source” – Time-driven
  • “Select type of time based trigger” – Day Timer – for once per day
  • “Select Time of Day” – During what time frame do you want the trigger to run. (GMT Time)

That is it – save that trigger and it is all yours.  Set up an email to yourself to test it all.  All the emails will be sent from your own @gmail.com account.

Just for fun, I include the script code here that is also in the repo:


function sendEmails() {
  //Set up some variables
  var startRow = 2; // First row of data to process
  var numRows = 100; // Number of rows to process
  var currentDate = new Date();
  var currentYear = currentDate.getFullYear();
  var currentMonth = currentDate.getMonth() + 1;
  var currentDay = currentDate.getDate();
  var emailSubjectPrefix = 'Reminder: ';
  var urlToGoogleSheet = 'https://docs.google.com/spreadsheets/????edit#gid=0';

  var sheet = SpreadsheetApp.getActiveSheet();
  // Fetch the range of cells A2:D100
  var dataRange = sheet.getRange(startRow, 1, numRows, 4);
  // Fetch values for each row in the Range.
  var data = dataRange.getValues();
  for (var i in data) {
    var row = data[i]; //Get the whole row
    var emailAddress = row[0]; // First column of row
    if (emailAddress != "") //If there is an email address, do something
    {
      var eventDate = new Date(row[1]); //second column of row
      var yearOfEvent = eventDate.getFullYear();
      var monthOfEvent = eventDate.getMonth() + 1;
      var dayOfEvent = eventDate.getDate();
      if (currentYear >= yearOfEvent && (currentMonth > monthOfEvent || (currentMonth == monthOfEvent 
           && currentDay >= dayOfEvent)))
      {
        var subject = emailSubjectPrefix + row[2];  //third column of row
        var message = row[3]; // fourth column of row
        message = "\r\n\r\n" + message + "\r\n\r\n";
        //Add a link to the spreadsheet in the email so people 
        //can easily go disable the reminder 
        message = message + "\r\nSent on " + currentDate + 
        "\r\nDisable the notification by changing the date on it here: "
        + urlToGoogleSheet;
        message = message + "\r\nReminder Start Date: " + eventDate;
        MailApp.sendEmail(emailAddress, subject, message);
      }
    }
  }
}

Posted in Code Design, CodeProject, Uncategorized | Leave a Comment »

Malware for Neural Networks: Let’s Get Hacking!

Posted by robkraft on March 24, 2017

I don’t intend to infect any artificial intelligence systems with malware. But I do intend to provide an overview of the techniques that can be used to damage the most popular AI in use today, neural networks.

With traditional hacking attempts, bad actors attempt to plant their own instructions, their own computer code, into an existing software environment to cause existing software to behave badly. But these techniques will not work on neural networks. Neural networks are nothing more than a big collection of numbers and mathematical algorithms that no human can understand well enough to alter in order to obtain a malicious desired outcome. Neural networks are trained, not programmed.

But I am not implying that damage cannot be done to neural networks, or that they can’t be corrupted for evil purposes. I am implying that the techniques for malware must be different.

I have identified five types of malware, or perhaps I should say five techniques, for damaging a neural network.

1. Transplant

The simplest technique for changing the behavior of an existing neural network is probably to transplant it: replace the existing neural network with a new one. The new, malicious, neural network presumably would be one that you have trained using the same inputs the old one expected, but that produces different outcomes from those inputs. To implement this malware, the hacker would first need to train the replacement neural network, and to do so the hacker needs to know the number of input nodes and output nodes, the range of values for each input, and the range of results of each output node. The replacement neural net would need to be trained to take the inputs and produce the outputs the hacker desires.

The second major task is to substitute the new neural network for the original. Neural networks accessible to the Internet could be replaced once the hacker had infiltrated the servers and software of the existing neural network. It could be as simple as replacing a file, or it could require hacking a database and replacing values in different tables. This all depends on how the data for the neural network is stored, and that is a fact the hacker would want to learn before even attempting to train a replacement. Some neural networks are embedded in electronic components. A subset of these could be updated in a manner similar to updating firmware on a device, but other embedded neural networks may have no option for upgrades or alterations, and the only recourse for the hacker may be to replace the hardware component with a similar component that has the malicious neural network embedded in it. Obviously there are cases where physical access to the device may be required in order to transplant a neural network.

2. Lobotomy

If a hacker desires to damage a neural network, but is unable or unwilling to train a replacement, the hacker could choose the brute force technique called the lobotomy. As you might guess, when performing a lobotomy the hacker randomly alters the weights and algorithms of the network in order to get it to misbehave. The hacker is unlikely to be able to choose a desired outcome or make the neural network respond to specific inputs with specific outputs, but the random alterations may lead the neural network to malfunction and produce undesirable outputs. If a hacker's goal is to sow distrust in the user community of a specific neural network, or of neural networks in general, this may be the best technique for doing so. If one lobotomy can lead a machine to choose a course of action that takes a human life, public sentiment against neural networks will grow. As with a transplant, the hacker needs to gain access to the data of the existing neural network in order to alter it.
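
A lobotomy is easy to sketch. In this toy example, a made-up two-layer weight array stands in for real model storage, and bounded random noise is added to every weight so the network still runs but misbehaves:

```javascript
// Toy sketch: given access to the stored weights, shift each one by a
// random amount. The shape of the network is preserved; its behavior is not.
function lobotomize(weights, noiseScale) {
  return weights.map(function (layer) {
    return layer.map(function (w) {
      return w + (Math.random() * 2 - 1) * noiseScale;
    });
  });
}

var original = [[0.5, -0.2, 1.1], [1.3, 0.7]]; // hypothetical two "layers"
var damaged = lobotomize(original, 0.5);
// damaged has the same shape, but every weight is shifted by up to +/- 0.5
```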

3. Paraphasia

Of the five hacking techniques presented here I think that paraphasia is the most interesting, because I believe it is the one a hacker is most likely to have success with. The term is borrowed from psychology, where it describes a disorder that causes a person to say one word when they mean another. In an artificial neural network, paraphasia results when a saboteur maps the responses from the neural network to incorrect meanings. Imagine that Tony Stark, aka Iron Man, creates a neural network that uses face recognition to identify each of the Avengers. When an image of Captain America is sent through the neural network's layers, the network recognizes him and assigns the label “Captain America” to the image. But a neural network with paraphasia, or I should say a neural network that has been infected with paraphasia, would see that image and assign the label “Loki” to it. Technically speaking, paraphasia is probably not accomplished by manipulating the algorithms and weights of the neural network. Rather, it is achieved by manipulating the labels assigned to the outputs. This makes it the most likely candidate for a successful neural network hacking attempt. If I can alter the software consuming the output of a neural network so that when it sees my face it doesn't assign my name to it, but instead assigns “President of the United States” to it, I may be able to get into secret facilities that I would otherwise be restricted from.
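
A sketch of the idea, using illustrative labels (nothing here reflects a real system): the network itself is untouched, and only the label table that the consuming software uses to interpret the network's output index is tampered with.

```javascript
// The consuming software maps an output index to a human-readable label.
var labels = ["Captain America", "Loki", "Thor"];

function interpret(outputIndex, labelTable) {
  return labelTable[outputIndex];
}

// Honest mapping: output index 0 means Captain America.
var honest = interpret(0, labels); // "Captain America"

// The attack swaps two entries in the table; no weight is changed.
var tampered = labels.slice();
tampered[0] = "Loki";
tampered[1] = "Captain America";
var infected = interpret(0, tampered); // "Loki"
```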

Open and Closed Networks

The first three hacking techniques could be applied to neural networks that are open, or that are closed. A closed neural network is a network that no longer adjusts its weights and algorithms based on new inputs. Neural networks embedded in hardware will often be closed, but the designers of any neural network may choose to close the neural network if they feel it has been trained to an optimal state. An open neural network is a network that continues to adjust its weights and algorithms based on new inputs. This implies that the neural network is open to two additional forms of attack.

4. Brainwashing

Many neural networks we use today continue to evolve their learning algorithms in order to improve their responses. Many voice recognition systems attempt to understand the vocalizations of their primary users and adapt their responses to produce the desired outcomes. Some neural networks that buy and sell stocks alter their algorithms and weights with feedback from the results of those purchases and sales. Neural network designers often strive to create networks that can learn and improve without human intervention. Others attempt to crowdsource the training of their neural networks, and one example of this you may be familiar with is captcha responses that ask you to identify items in pictures. The captcha producer is probably not only using your response to confirm that you are a human, but also to train their neural network on image recognition. Now, imagine that you had a way to consistently lie to the captcha collection neural network. For dramatic effect, let’s pretend that the captcha engine showed you nine images of people and asked you to click on the image of the President of the United States. Then imagine that, as a hacker, you are able to pick the image of your own face millions of times instead of the face of the President. Eventually you may be able to deceive the neural network into believing that you are the President of the United States. Once you had completed this brainwashing of the neural network, you could go to the top secret area and the facial recognition software would let you in because it believed you to be the President. I am not saying that brainwashing would be easy. I think it would be really difficult. And I think it would only work in the case where you could programmatically feed a lot of inputs to the neural network and have some control over the identification of the correct response. 
For best results, a hacker might attempt to use this technique on a neural network that was not receiving updates through a network like the Internet, but was only receiving updates from a local source. A neural network running inside an automated car or manufacturing facility may operate with this design. Brainwashing is similar to paraphasia. The difference is that in brainwashing, you train the neural network to misidentify the output, but in paraphasia you take a trained neural network and map its output to an incorrect value.
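
Here is a toy sketch of the attack. A real network adjusts weights rather than tallying votes, but the principle is the same: whoever can flood an open training channel decides what the model "learns". All names here are made up:

```javascript
// A stand-in for an online learner: it tallies feedback per input and
// predicts whichever label has the most votes.
function OnlineLearner() {
  this.votes = {};
}
OnlineLearner.prototype.learn = function (input, label) {
  var counts = this.votes[input] || (this.votes[input] = {});
  counts[label] = (counts[label] || 0) + 1;
};
OnlineLearner.prototype.predict = function (input) {
  var counts = this.votes[input] || {};
  var best = null;
  var bestCount = -1;
  for (var label in counts) {
    if (counts[label] > bestCount) {
      bestCount = counts[label];
      best = label;
    }
  }
  return best;
};

var learner = new OnlineLearner();
for (var i = 0; i < 1000; i++) learner.learn("hacker-face", "Random Citizen"); // honest feedback
for (var j = 0; j < 5000; j++) learner.learn("hacker-face", "President"); // attacker floods the channel
var result = learner.predict("hacker-face"); // "President"
```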

5. OverStimulation

Like a lobotomy, the overstimulation technique only allows the hacker to cause mischief by leading the neural network to make incorrect choices. The hacker is very unlikely to achieve a specific desired outcome from the neural network. Overstimulation can only occur on poorly designed neural networks, essentially those subject to the overfitting flaw of neural network design. A neural network that is not closed, and that was designed with an inappropriate number of nodes or layers, could be damaged by high volumes of inputs that were not examples from the original training set.

Layers of Difficulty

To all you aspiring hackers, I also warn you that our neural networks are getting more complex and sophisticated every day, and I think this makes it even more difficult to hack them using the techniques described here. The deep learning revolution has been successful in many cases because multiple neural networks work in sequence to produce a response. The first neural network in the sequence may just try to extract features from the incoming sources. The identified features are the output of the first network, and these are passed into a second neural network for more grouping, classification, or identification. After that, these results could be passed on to another neural network that makes responses based upon the outputs of the previous one. Therefore, anyone attempting to hack the process needs to decide which of the neural networks within the sequence to damage.

I am not encouraging you to try to introduce malware into neural networks. I am strongly opposed to anyone attempting to do such things. But I believe it is important for system engineers to be aware of potential ways in which a neural network may be compromised, and raising that awareness is the only purpose of this article.

Posted in CodeProject, Security | Tagged: , , , | 1 Comment »

Robert’s Rules of Coders: #11 Separate User Interface Logic From Business Logic

Posted by robkraft on July 10, 2016

One goal to keep in mind as you write software is to write software that is easy to maintain and enhance in the future. We can do this by organizing code so that things that might change will be easier to change. Consider this example:

CodeArt12-1

CodeArt12-2

In the code above, User Interface (UI) code is mixed together with the business object code. We should try not to pass details about how the UI implements a feature unless the business object really needs to know. In this example, the Products business object really only needs to know three pieces of information from the UI:

  • The Price
  • Should a discount be calculated because this is for a nonprofit agency? (yes or no)
  • Should a discount be calculated because this is a bulk purchase? (yes or no)

If we change the code to pass boolean values to the Products business object instead of the checkboxes, we gain the following benefits:

  • The UI can easily be changed in the future to use something other than checkboxes, and this change will not require also changing code in the Products business object.
  • We increase our potential to use the Products business object with different types of user interfaces. This business object may currently expect a C# WPF checkbox control, which means the business object would not work if someone had a C# Windows checkbox control, or perhaps a C# Silverlight checkbox control. But if the Products business object accepted a boolean, which is a datatype common to more platforms, the business object will more likely work with different user interfaces.
  • Unit tests that we write won’t need to reference the specific UI components needed for building the user interface.
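
The first example can be reduced to a small sketch. The article's screenshots are C#; this sketch uses JavaScript, and the names and discount rates are made up for illustration. The point is that the business function receives plain booleans, and only the UI layer knows about controls:

```javascript
// Business object: knows nothing about checkboxes, only plain values.
function calculatePrice(price, isNonProfit, isBulkPurchase) {
  var discount = 0;
  if (isNonProfit) discount += 0.10;
  if (isBulkPurchase) discount += 0.05;
  return price * (1 - discount);
}

// UI layer: translates its controls into plain values at the boundary.
function onCalculateClicked(priceTextBox, nonProfitCheckbox, bulkCheckbox) {
  return calculatePrice(
    parseFloat(priceTextBox.value),
    nonProfitCheckbox.checked,
    bulkCheckbox.checked
  );
}

// A unit test, or any other UI, needs no controls at all:
var total = calculatePrice(100, true, false); // 90
```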

A more common way that developers often entwine UI code with business object code is shown below. This example is the opposite of the case above. Here logic that could, and should, reside in the business object is performed in the UI.

CodeArt12-3

CodeArt12-4

The reason we don’t like this code is that logic to calculate the discounted price should be moved from the UI to the Product business object. In doing so we would gain the following benefits:

  • We could reuse the Product business object in another program without needing to also add logic to the UI of the other program to calculate the discounted price.
  • If we need to change the calculation for the discounted price, we need to make the change in only one place and every program using that business object automatically is affected.
  • We can easily write a unit test on the Product business object to make sure that the code calculating our discounted price works correctly.

A better way to write the code from both examples above so that the UI and business logic is not entwined is shown below. I will admit that this is not the best example, because it does not use TryParse, nor does it have input checking and error handling, and it could use an interface, but those topics are not the point of this article, which is to encourage you to separate the UI logic from the business logic in your applications.

CodeArt12-5

CodeArt12-6
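
In case the screenshots do not render, here is a minimal JavaScript sketch of the separated design (the originals are C#, and the names and discount rate here are illustrative):

```javascript
// Product business object: the discount rule lives here, in one place.
function Product(basePrice) {
  this.basePrice = basePrice;
}
Product.prototype.getDiscountedPrice = function (isNonProfit) {
  // Hypothetical rule: nonprofits pay 75% of the base price.
  return isNonProfit ? this.basePrice * 0.75 : this.basePrice;
};

// UI layer: reads inputs, delegates the calculation, formats the result.
function showPrice(priceInputValue, nonProfitChecked) {
  var product = new Product(parseFloat(priceInputValue));
  return "Price: " + product.getDiscountedPrice(nonProfitChecked).toFixed(2);
}

// Every program reusing Product gets the same rule, and a unit test can
// exercise the calculation without any UI:
var label = showPrice("200", true); // "Price: 150.00"
```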

It is sometimes acceptable to write code that entangles UI code and business logic, knowing that you will refactor the code to move the logic to the correct place before you consider the feature complete. It is often helpful to have all of the code in one big method until you have it working correctly; then you can improve the code by refactoring it.

As with any of Robert’s Rules of Coding, you don’t need to adhere to them all of the time and there are cases where it is better not to. But most programmers should follow the rules most of the time. I hope you agree.

Go to Robert’s Rules of Coders for more.


Posted in Code Design, CodeProject, Coding, Uncategorized | 1 Comment »

Agile Baby Steps: A Parable To Help You Get Started

Posted by robkraft on March 20, 2016

We often hesitate to take the action that shows we are committed to doing something new. We read about it, analyze it, and try to understand it; but real learning requires that we go beyond reading. We must DO. The goal of this article is to get you to take action toward becoming Agile, without understanding or adopting all of the habits of an Agile development team. I am asking you to try out some new processes in your software development life cycle, without considering whether or not you are doing Agile development.

Side bar: Your ability to implement an Agile technique depends upon the process by which your software is implemented. An Agile development technique that works for one process may not work for another, so be cautious of Agile recommendations that state you must do something specific or you will fail at being Agile. Your goal is not to be “Agile” by anyone’s definition. Your goal is to write better software. 

  • Some software developers write code, then send it to a quality assurance team that pushes it into production;
  • Some software developers write code for embedded systems where all the software must be completed before it is written to a chip;
  • Some software developers check in code that runs through automated tests and gets published to a public web site without further human action;
  • And some software developers follow other models for implementation.

The process by which you take code from development into its final environment greatly affects which agile techniques will work for you.

The Parable Begins

Let me share with you a parable of two teams, each tasked with developing the same software, but each using a different methodology.

Team Agile used an agile methodology, and team waterfall used a waterfall method. Both teams were asked to build a web application to allow users to manage data in an existing client-server application.

Team Agile met with the end users, usually known as the product owners in agile, and learned the high level requirements and features desired for the entire application. Because they did not gather detailed requirements, they spent only eight hours on this task.

Side bar: Delaying the gathering of detailed requirements often adds value in several ways:

  1. You don’t spend time gathering and documenting detail requirements for features that later get cancelled or excluded from the project.  If the team spends ten hours detailing the requirements of a feature that management decides not to implement a month later, then the team wasted ten hours. Eliminating waste is the major focus of Lean and Kanban styles of Agile development.

  2. Another reason to delay gathering detailed requirements is that every team member will be smarter in the future than they are today.  Each will know more about the application and may learn that ideas considered early in development are not as effective as new ideas learned since. Perhaps a developer read about a new programming technique or tool; or perhaps the product owner learned about a better way to design a user interface. You can implement these new ideas and techniques even if you documented detail requirements for the old techniques, but that means the time spent gathering and documenting requirements for doing it the old way was wasted time.

Prioritization

Team Agile also asked the product owners which features were most important. The product owners initially said all of the features were necessary and important, but after more discussion the product owners provided this prioritized list of features:

  1. Users need to be able to log in
  2. To view data
  3. To edit data
  4. To add data
  5. To delete data

Prioritizing features is an important and necessary part of agile development, as we shall see later. If you do not prioritize the features you are going to work on, you will probably not receive the benefits that agile development can provide.

Meanwhile, in a parallel universe, the Team Waterfall also met with the end users to gather requirements. They spent much more than eight hours on this task because they needed detailed requirements for all the features. They planned for little interaction with the product owner after this meeting until the product was finished. Team Waterfall spent sixty-four hours on requirements.

The Login Feature – Deploy Early

Team Agile next did some design for the project. They thought about all of the requirements they had learned, but they only did a detailed design for the first feature they worked on: the ability to log in. They spent four hours on the design of this feature.

Then Team Agile coded the login feature. The coding took eight hours. Next, Team Agile turned the application over to the Quality Assurance (QA) team. Even though the entire application was not completed the QA team found a few problems with the login feature. Team Agile fixed those problems, and the QA team could find no more defects.

Side bar: Agile development does not magically prevent programmers from creating bugs, but it does make developers aware of the bugs sooner, so they can fix them while the code is still fresh in their minds and before some errors might get propagated into more of the code.

Team Agile implemented the application in production. Now, this seemed a little silly to some people, because the application did not do anything other than let a user login; but it turned out to be very valuable. They discovered that the software did not work in production. The production environment had an older version of the web server that lacked some features the application depended on. Team Agile met with IT to discuss the problem and decide if the web app should be re-written, or if the production web server could be upgraded to a newer version. They decided the web server could be upgraded, but it did require two weeks for this to be completed.

Agile Manifesto Principle #1 – “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”

Side bar: Not all software can be deployed in small pieces, such as software embedded on chips or shrink-wrapped software. But some software, like the software in this parable, can be deployed in pieces. By taking the software as far as possible along the path of implementation you may discover problems. It is always best to know about problems sooner so they can be accounted for in the project schedule and possibly used to correct the product requirements, design, and code. A product developed using a waterfall method has a higher risk of failing to discover some problems until all the code is completed and thus incurs significantly higher costs to correct the problem.

Side bar: Agile methodologies reduce the risk of unknown and unexpected problems by revealing them sooner.

The View Feature

While Team Agile waited for the new web server to be implemented in production, they proceeded to work on the second feature, “Allow users to view data”. They met with the users to get more detailed requirements about how they would like to view the data. They spent eight hours on this task. Team Agile then created a design, including some mockups, and reviewed the mockups with the product owners. After these sixteen hours of work the developers were ready to begin coding.

I have not forgotten about Team Waterfall. During all this time that Team Agile did the activities above, Team Waterfall has been gathering requirements. Team Waterfall is now ready to design the application, and they will spend about forty-four hours in design, which is a little less than the total amount of time Team Agile will spend on design. In this parable, Team Waterfall benefitted by designing the entire application all at once because it was all fresh in their minds as they did it. Team Agile, on the other hand, did parts of the design spread out over several months, and had to spend part of that time recalling why some decisions were made. However, the advantage still goes to Team Agile, as we shall see, because Team Waterfall will discover that much of their time spent in design was wasted.

Team Waterfall completed their design, then started coding. They chose to code the view and edit features first, and at the same time, because they believed them to be the most interesting and fun parts of the code to write. For both Team Agile and Team Waterfall, the coding phase(s) of the application take the longest, around three hundred hours. At the same time that Team Waterfall is working on the total application design, Team Agile begins coding their second feature, “Viewing Data”.

Communication With Product Owner

For both teams, the time spent coding is the same for all features except for “Viewing Data”. Team Agile spent one hundred and twenty hours coding this feature, but Team Waterfall is going to spend one hundred and sixty hours coding this same feature for the following reasons.

  • In the first week, a developer attempted to implement a list box on a form as had been requested in the requirements. But the developer found that this data would be difficult to display given the list box features. He realized he could easily do this with a grid though.  So the developer brought this up with the product owner during the daily status meeting, and the product owner said he didn’t care if it was a list box or grid, he barely understands the difference between the two, and he would just prefer to defer that decision to the developer. So the developer used a grid instead of a list box and saved an estimated forty hours of work that would have been needed if he had tried to make the feature work using a list box.

Agile Manifesto Principle #4 – “Business people and developers must work together daily throughout the project.”

  • In the second week, another developer was working on a feature to let users pick their own colors for the forms. The requirement called for a text box in which the user could type a hexadecimal value representing the color, but the developer had recently learned about a component that could just as easily provide a color picker to make it much easier for the end user. Instead of adhering to the requirements the developer showed the product owner liaison an example of the color picker and asked if this change would be acceptable and the product owner liaison loved the idea, so it was implemented.

Agile Manifesto Principle #6 – “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.”

Side bar: Once again, the ability for frequent interaction between the developers and the end users throughout development facilitates many improvements. Also, developers are often more aware of the capabilities of technology than the end users and can make suggestions for improving the application based on that knowledge. When the discussion for detailed requirements can be delayed and the developer writing the code can be involved, there is a greater chance for a better solution.

Side bar: A good technique for software development, and for many decisions in life, is that it is best to commit to a decision as late as possible because your knowledge later in the life cycle is greater than it will be earlier in the cycle.

Reports Feature – New Requirements

During Team Agile’s development of the “View Data” feature, the product owner realized they had omitted the reports feature from the project. The reports are used by every user and are much more important than the ability to delete, add, or even edit data. The product owner and the developers had a meeting about the omission and decided that the developers would add the report feature next, after they finished the View feature.

Agile Manifesto Principle #2 – “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.” The ability to accept new requirements and to change the priorities of features developed is one of the most noticeable and valuable aspects of agile development.

The development team finished the view feature and easily deployed it into production. The product owners started using the application, even though all of the features were not available.

Agile Manifesto Principle #3 – “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale. “

Deploying the view feature provided several benefits:

  • The company could begin deriving value from the application. In financial terms, the Return On Investment (ROI) starts occurring sooner in Agile projects than in Waterfall projects.
  • The users became more productive because the web application was easier and faster to use than the client server application. It was also easier for the IT staff to make it available to more users.
  • The users found bugs in the application. Finding some of these bugs may prevent similar bugs from being developed in the remainder of the application. For example:
    • The users found that there were no accessibility keys. So the development team planned to add these to the view screens, and were proactive about adding this feature in future development.
    • Twenty percent of the users found that some features did not work on the particular browser they used, which was different than the browser used by the developers and most of QA.
    • A few bugs were found causing incorrect data to be displayed.
  • Side bar: Teams often desire to fix bugs right away, but in an Agile environment, especially one with short one-week or two-week iterations, this can be counterproductive. It is generally best to let the team complete what they started in an iteration, then make the fixes to the bugs a top priority for the next iteration.

Waterfall Team Progress

Let’s check in on Team Waterfall. At the same time Team Agile is coding and deploying the “View Data” feature, Team Waterfall chose to code the “View Data” and “Edit Data” features, and they are still in the process of doing this. They have nothing yet to show to the product owners, so let’s turn back to Team Agile, because they have some visible progress that we can check on.

Agile Manifesto Principle #7 – “Working software is the primary measure of progress.”

Team Agile finished the “View Data” feature, and started to develop the “Reports” feature next. The requirements and design only took sixteen hours, and by this time the reports of bugs in the “View Data” feature were coming in. However, the team felt they could and should complete the “Reports” feature before working on the bugs so that they did not incur the cost of switching back and forth between tasks. The product owners accepted this decision because the development iterations were short, and the development team said they could start fixing the bugs next week.

After finishing the “Reports” feature and deploying it to production, the team spent a week fixing the bugs in the “View Data” feature. Some of the bugs had made some views unusable, and other bugs, such as accessibility and support for other browsers, would affect how they developed all subsequent features.

Side bar: Agile development does not magically eliminate bugs, nor does it prevent errors in requirements, design, and communication. But Agile development does reveal most bugs sooner so they can be fixed more quickly and cost less to correct than they would in waterfall development.

Team Waterfall is still coding the “Edit Data” feature at this point in time. It took them longer to code the “View Data” feature than it took Team Agile because Team Waterfall made the list box work as documented in the requirements rather than go back and talk to the product owners about using a grid instead. Team Waterfall also spent time explaining to the product owner that they could not add the “Reports” feature to the product because they had already gathered all of the requirements and done the design and they would need to redo some of that for a new feature. Ultimately, the product owner agreed to increase the product budget and provide proper paperwork for a “Change Request” to the requirements of the system. Team Waterfall then spent eighteen hours gathering the requirements and changing their design, which included changing the design of some database tables they had not started coding against yet. Since Team Waterfall still has nothing to show, we will go back and see what is happening with Team Agile.

More New Feature Requests

Team Agile has started the requirements and design of the “Edit” feature. During development of the edit feature, the product owner realized they had omitted filtering and sorting abilities in the view feature and that filtering and sorting was really necessary to make the views more valuable. Team Agile decided they would add sorting and filtering to views right after they completed the editing feature.

The Cancellation of the Project

But in the next week a new project came in and the developers were asked to suspend this project and work on the new project. The new project was very important, of course, and would probably take the team a year to complete. Team Agile was given one week to wrap up this project and begin work on the new project. For Team Agile, the editing feature was almost done, but the date and time pickers only worked on one browser and the developers estimated it would take them three to five days to get new date and time pickers working on all browsers. Team Agile had to choose between these options:

  1. Add filtering and sorting to views and not release the edit feature
  2. Finish the edit feature so the date picker worked on all browsers and users would not have to type in dates, but not add filtering and sorting to views
  3. Add filtering and sorting to views and release the edit feature without a date picker, requiring users to type in the date.

Team Agile desired to complete the edit feature by making the date picker work because they did not want to provide the end users with a subpar, low quality product; and they thought it would make the developers look bad if the app did not have the simple date pickers. But the product owner said that the filtering and sorting of views was most valuable, and that they would take the editing feature even without the date pickers because the ability to edit data from the web application would provide some value to users even if the feature was not finished perfectly.

Sidebar: Agile development often gives the product owner insight into the product development processes and decisions. This is almost always a benefit to the business because the product owner can help guide the product toward the solution that provides the best ROI for the business. However, it can occasionally upset some developers when they feel they are being asked to cut quality to get the job done. The developers may feel it will reflect poorly on them. It is up to the management teams to convey to the end users that the decisions made in this situation were those of management, not the developers. Waterfall teams rarely have this dilemma because their product owner is unaware of most of the decisions being made.

Speaking of waterfall teams, what is going on with Team Waterfall now that this new project has arrived and they must work on it instead? One benefit for Team Waterfall is that they can start the new project right away instead of spending a week trying to wrap up the old one, because there is no way Team Waterfall could deliver anything within one week; they never even started the Login feature of the application. The obvious, enormous downside for Team Waterfall is that they will deliver nothing to the end users, and all the time spent on the application can now be considered waste. That is not the case for Team Agile: even though the project was terminated early, the agile team delivered something of value that could be further enhanced in the future.

Agile Manifesto Principle #10 – “Simplicity–the art of maximizing the amount of work not done–is essential.”

I provide two summaries of this parable. The first is a summary specific to the tale, and the second is a summary of general conclusions to be drawn about agile development.

Specific Summary

  • Team Agile delivered some business value, but all the time spent by Team Waterfall was a waste.
  • Team Agile reduced the development time of some features by frequent interaction with end users and by being open to changing the requirements.
  • Team Agile provided a better way for end users to choose their colors than Team Waterfall did, because the UI decision was not made until the feature was being developed, and by that time the developer had learned of a new component.
  • Team Agile accommodated the “Report” feature because they had a prioritized backlog and could easily queue it up to work on next. Team Waterfall did not prioritize their work, so any new development would probably just be added at the end. Team Waterfall would need to alter their existing requirements and design.
  • Team Waterfall never learned that their app would not work in production due to the older web server. It is probable that the team would be rushing to deliver this product by a deadline, only to discover right at the end that additional time would be required. It could have been even worse if IT was unable to upgrade the web server and the development team had to go back and change code to make the application work on an older web server.

General Summary

  • Agile teams often waste less time than waterfall teams.
  • Frequent interaction with end users can produce a better product with less waste. This is not exclusive to agile development, but it is more common to agile development than to waterfall.
  • The willingness to accept flexible requirements can produce a better product with less waste. This is more difficult to do when all requirements have been gathered up front and have been included in a design.
  • Delaying requirement and design details can lead to better decisions at the time the decision needs to be made.
  • Agile teams accept new requests easily by adding them in the backlog. They do not have a lot of time invested in any features in the backlog because they wait and do the detailed requirements and design for them when they are about to code them.

If you want to become more Agile today:

  • Create a prioritized backlog
  • Select features from the backlog that you will complete during your next iteration. A good iteration length is two weeks.
  • Make sure that you don’t just code the features; include testing and, if possible, deployment within your iteration.
  • Do not work on several things at the same time. Complete each feature as much as possible.
  • Finish what you start each iteration. Do not interrupt what you started in an iteration to work on something new that came into the backlog. Wait until the next iteration to start it.
    • Sometimes, something very high priority will come in that must be completed right away. Agile developers understand and accept this.

Posted in CodeProject, Process, Project Management, Uncategorized | Leave a Comment »

Robert’s Rules of Coders #9: Eliminate Deep Nesting by Using Switch Statements and Functions

Posted by robkraft on February 14, 2016

The ‘If’ statement is one of the fundamental coding constructs of almost every programming language. Along with the ‘If’ statement, most languages also support ‘else’ conditions and the ability to nest ‘If’ statements. But this simple construct can also become one of the biggest contributors to code that is difficult to understand and modify. This often happens when ‘If’ statements get nested within ‘If’ statements, but there are two simple techniques you can use to reduce this complexity: ‘Switch’ statements and functions.

Switch statements offer these benefits to most developers:

  • They are easier to read, and thus
  • They are easier to understand, and thus
  • They are easier to maintain.
  • They are also easier to debug.
  • In many languages, they can also be compiled to execute a little more swiftly than nested ‘If’ statements.

Here is an example of an ‘If’ statement that can be improved by converting it to a ‘Switch’ statement:

If aValue = 6 then
    Stars = Stars + 1
Else
    If aValue = 7
        Stars = Stars + 3
    Else
        If aValue = 8
            Stars = Stars + 5
        Else
            If aValue = 9
                Stars = Stars + 9
            End If
        End If
    End If
End If

Here is the same logic from above, using a ‘Switch’ statement:

Switch aValue
    Case 6:
        Stars = Stars + 1
    Case 7:
        Stars = Stars + 3
    Case 8:
        Stars = Stars + 5
    Case 9:
        Stars = Stars + 9
End Switch
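
Not every language has a ‘Switch’ statement, but the same flattening effect is available from a lookup table. Here is a small sketch in Python (the names are mine, and a dictionary stands in for the ‘Case’ labels, since Python has no ‘Switch’):

```python
# Increment to award for each value of a_value; the dictionary
# plays the role of the 'Case' labels in a 'Switch' statement.
STAR_INCREMENTS = {6: 1, 7: 3, 8: 5, 9: 9}

def add_stars(stars, a_value):
    # .get() with a default of 0 covers the "no matching Case" path.
    return stars + STAR_INCREMENTS.get(a_value, 0)
```

A lookup table also turns the value-to-increment mapping into data rather than code, so adding a new case is a one-line change.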

I suspect you will agree that the code in the ‘Switch’ statement is easier to understand than the code in the nested ‘If’s. Another technique for eliminating nested ‘If’ statements is to move some of the code into separate functions. Although the hierarchy of ‘If’ statements may remain the same from the computer’s point of view, to most humans the code becomes much easier to manage.

If input data is valid
    If filename is valid
        Create File
        If file was created
            Log “Success”
            Return “Success”
        Else
            If error due to size
                Log “Failure”
                Return “Could not create file because it is too large.”
            Else If error due to permission
                Log “Failure”
                Return “Could not create file because you do not have permissions.”
            Else
                Log “Failure”
                Return “Unable to create the file. Reason unknown.”
            End If
        End If
    Else
        Log “Failure”
        Return “Your file name is invalid.”
    End If
Else
    Log “Failure”
    Return “The file input is invalid.”
End If

Here is the same logic from above, using functions:

String response = “”

response = IsInputValid(myinput)
If response <> “”
    Return response
End If

response = IsFileNameValid(myfile)
If response <> “”
    Return response
End If

Return FileCreationResultMessage(myfile, myinput)

The functions called from the code above:

Function string IsInputValid(string input)
    If input is not valid
        Log “Failure”
        Return “The file input is invalid.”
    Else
        Return “”
    End If
End Function

Function string IsFileNameValid(string input)
    If input is not valid
        Log “Failure”
        Return “Your file name is invalid.”
    Else
        Return “”
    End If
End Function

Function string FileCreationResultMessage(string file, string input)
    Create File
    If file was created
        Log “Success”
        Return “Success”
    Else
        If error due to size
            Log “Failure”
            Return “Could not create file because it is too large.”
        Else If error due to permission
            Log “Failure”
            Return “Could not create file because you do not have permissions.”
        Else
            Log “Failure”
            Return “Unable to create the file. Reason unknown.”
        End If
    End If
End Function
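
To make the guard-clause style concrete, here is a hedged sketch of the same refactoring in Python. The function and variable names are my own, the validation rules are stand-ins (the pseudocode only says “input is not valid”), and the logging and file-size check are omitted for brevity:

```python
def is_input_valid(data):
    # Stand-in validation: the pseudocode only says "input is not valid".
    if not data:
        return "The file input is invalid."
    return ""

def is_file_name_valid(name):
    # Stand-in validation: reject empty or whitespace-only names.
    if not name or name.strip() == "":
        return "Your file name is invalid."
    return ""

def file_creation_result_message(name, data):
    # Attempt the file creation and translate failures into messages.
    try:
        with open(name, "w") as f:
            f.write(data)
    except PermissionError:
        return "Could not create file because you do not have permissions."
    except OSError:
        return "Unable to create the file. Reason unknown."
    return "Success"

def create_file(name, data):
    # Guard clauses: each validation returns early with its error message,
    # so the happy path reads top to bottom with no nesting.
    response = is_input_valid(data)
    if response != "":
        return response
    response = is_file_name_valid(name)
    if response != "":
        return response
    return file_creation_result_message(name, data)
```

Each helper owns one decision, so none of the functions nests more than one level deep.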

As with any of Robert’s Rules of Coding, you don’t need to adhere to them all of the time and there are cases where it is better not to. But most programmers should follow the rules most of the time. I hope you agree.

Go to Robert’s Rules of Coders for more.

Posted in Code Design, CodeProject, Coding, Robert's Rules of Coders | 6 Comments »