Rob Kraft's Software Development Blog

Software Development Insights

Microsoft SQL Server Violates the Commutative Law of Basic Math

Posted by robkraft on April 16, 2015

I love SQL Server.  I’ve been a SQL Server expert for decades, but just a few days ago I found a bug in SQL Server when performing simple math calculations.

These two queries yield different results:

  • select 1.0 * 3/2
  • select 3/2 * 1.0

The first query returns 1.5, and the second query returns 1.0.  Different results violate the commutative law of math, which tells us that the order of operands should not matter.

I understand why this is happening.  SQL Server evaluates the expression from left to right, using the data types of the operands at each step.  In the first query, 1.0 is a decimal literal, so 1.0 * 3 is decimal math and the division by 2 keeps the decimal places.  In the second query, 3 and 2 are both ints (aka integers), so 3/2 is integer division that truncates to 1 before the multiplication by 1.0.  But just because we can explain it does not mean that it is acceptable.
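Incidentally, C# follows the same left-to-right promotion rules, so the mechanism is easy to see outside of SQL Server.  Here is a minimal C# console sketch (not T-SQL) illustrating the behavior, including the cast fix described below:

using System;

class PromotionDemo
{
    static void Main()
    {
        int int1 = 3, int2 = 2;
        double dbl1 = 1.0;

        Console.WriteLine(dbl1 * int1 / int2);         // 1.5 - dbl1 * int1 is double math from the start
        Console.WriteLine(int1 / int2 * dbl1);         // 1   - int1 / int2 is integer division first
        Console.WriteLine((double)int1 / int2 * dbl1); // 1.5 - an explicit cast restores the expected result
    }
}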

This problem affects real code.  The problem occurs not just with values, but when doing similar calculations with columns in tables.

create table CommTest (int1 int, int2 int, dbl1 float)
insert into CommTest values(3,2,1.0)
select dbl1 * int1/int2 from CommTest
select int1/int2 * dbl1 from CommTest

The first select in this query will return 1.5, and the second select will return 1.

The fix is to cast the first variable as a float “select cast(int1 as float)/int2 * dbl1…”, but that is challenging to remember and difficult to apply if you have code generating SQL for you.  The better solution is probably to do all math in the program calling the database; at least if SQL Server is your DBMS.  Though this is not easy when you just have a reporting tool like SSRS for your front end.

Until Microsoft fixes this, which I doubt they ever will, developers just need to be extra careful.

This problem does not exist in every database management system.  At least I know it does not exist in Microsoft Access or Oracle.  For reference, I tested this on SQL Server 2014.

Posted in SQL Server

One Cause of ORA-00600 with qctstc2o1

Posted by robkraft on February 24, 2015

Today one of my queries failed on Oracle because the query had too many columns.  The query is very simple, but returns 1023 columns of output:

SELECT * FROM TABLE1
INNER JOIN TABLE2 ON TABLE1.COL1 = TABLE2.COL1
INNER JOIN TABLE3 ON TABLE1.COL1 = TABLE3.COL1
INNER JOIN TABLE4 ON TABLE1.COL1 = TABLE4.COL1

If I remove any of the joins from the query it will succeed.  If I return fewer than 1000 columns it will succeed.  But if I run it as is I get this error:

Error report:

SQL Error: ORA-00600: internal error code, arguments: [qctstc2o1], [1], [0], [0], [1], [0], [0], []

00600. 00000 –  “internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]”

*Cause:    This is the generic internal error number for Oracle program exceptions. This indicates that a process has encountered an exceptional condition.

*Action:   Report as a bug – the first argument is the internal error number

Posted in Coding

A Simple Build Summary Email For Your Development Environment. Tests Included!

Posted by robkraft on January 13, 2015

We cobbled together a simple system to provide us a single email each day to tell us if all of our jobs and unit tests were successful on their overnight runs.

An example of our daily build summary email.


Given that we have a dozen different jobs that run tests and a dozen that run builds, it was helpful to have the results of all the jobs in a single email. To implement something similar to what we did, you need the following:

  • A database for storing the results of the jobs. We used SQL Server.
  • A program to write the build results to the database. The C# program we wrote is provided here.
  • A few modifications to your build scripts. Our changes to our Nant builds are shown here.
  • Optionally a testing framework that produces output you can read easily.  We use nunit.
  • A report sent via email. We used an SSRS report to send the email summary each morning.


Here is how we did it:

1) We created a table in SQL Server to store the results of each job.

SQL for Creating Table ZBuildJobs

Here is a sample of the table with data:

Sample Of Table ZBuildJobs with data


2) We created a program that we could call from a command prompt and pass a few parameters to write into the table.   This is our C# program.  It consists of just three methods.

The App entry code reads the command-line arguments.  If more than two are supplied (the code checks for > 3), we also look for a test-results file from which to count the number of tests passed and failed.

Code Part 1
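The original screenshot is not reproduced here, so this is a minimal sketch of the entry point (argument handling is inferred from the description above; the three methods in parts 1 through 3 belong to one class):

using System;

// Part 1: the entry point (argument names assumed).
// Usage: BuildLogger.exe "Job Name" Succeeded ["path\to\TestResult.xml"]
static void Main(string[] args)
{
    string jobName = args[0];
    string result = args[1];
    int? passed = null, failed = null;
    if (args.Length > 2) // a test-results file path was also supplied
    {
        Tuple<int, int> counts = ProcessTestResults(args[2]);
        passed = counts.Item1;
        failed = counts.Item2;
    }
    CallDatabase(jobName, result, passed, failed);
}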

The CallDatabase method simply writes the values passed in to the database table.

Code Part 2
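Again as a sketch; the connection string name and the ZBuildJobs column names are my assumptions, so adjust them to match your table from step 1 (requires a reference to System.Configuration):

using System;
using System.Configuration;
using System.Data.SqlClient;

// Part 2: write one row to the ZBuildJobs table.
static void CallDatabase(string jobName, string result, int? passed, int? failed)
{
    string connStr = ConfigurationManager.ConnectionStrings["BuildDb"].ConnectionString;
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(
        "INSERT INTO ZBuildJobs (JobName, Result, TestsPassed, TestsFailed, RunDate) " +
        "VALUES (@job, @result, @passed, @failed, GETDATE())", conn))
    {
        cmd.Parameters.AddWithValue("@job", jobName);
        cmd.Parameters.AddWithValue("@result", result);
        cmd.Parameters.AddWithValue("@passed", (object)passed ?? DBNull.Value);
        cmd.Parameters.AddWithValue("@failed", (object)failed ?? DBNull.Value);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}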

The ProcessTestResults method uses some example code from the Microsoft URL shown to read the results of the nunit test file and count the number of passed and failed tests.

Code Part 3
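A sketch of counting results from an NUnit 2.x TestResult.xml file (the XPath reflects the NUnit 2 schema and may need adjusting for your version):

using System;
using System.Xml;

// Part 3: count passed and failed tests in the NUnit results file.
static Tuple<int, int> ProcessTestResults(string resultsFile)
{
    int passed = 0, failed = 0;
    var doc = new XmlDocument();
    doc.Load(resultsFile);
    foreach (XmlNode testCase in doc.SelectNodes("//test-case[@executed='True']"))
    {
        XmlAttribute success = testCase.Attributes["success"];
        if (success != null && success.Value == "True")
            passed++;
        else
            failed++;
    }
    return Tuple.Create(passed, failed);
}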


3) We modified our nant build files to call the program written in step 2 and pass in a value of success or failure.  For nunit test jobs we also pass in the XML results file created by our nunit tests.

  • Nant includes some built-in properties that we can use to specify a target to call when the nant job succeeds or fails.  So we configured those properties like this:
    • <property name="nant.onsuccess" value="BuildSuccess" />
    • <property name="nant.onfailure" value="BuildFailure" />
  • For ease of maintenance, we created a property to hold the name of our program from step 2 above that does the logging:
    • <property name="param.EventLogPgmName" value="C:\Dev\BuildLogger\BuildLogger.exe" />
  • To include spaces in our job names we need to surround the parameters we pass to BuildLogger with double quotes, and to include double quotes as part of the parameters we needed to create a property to hold the double quotes.  We also have a property to hold our current build version number:
    • <property name="dq" value='"'/>
    • <property name="project.version.full" value="15.0"/>
  • Then at the start of each build target we add a name for that build, something like this:
    • <property name="param.BuildNameForLog" value="${dq}${project.version.full} C# .Net Build${dq}" />
  • If the job runs unit tests, we include a property to specify the name of the unit test output file:
    • <property name="param.TestResultsFile" value="${dq}c:\program files (x86)\NUnit\bin\TestResult.xml${dq}" />
  • The final change to our build files is to declare the targets for BuildSuccess and BuildFailure.  These look like this:
    • <target name="BuildSuccess">
      • <exec program="${param.EventLogPgmName}" commandline=" ${param.BuildNameForLog} Succeeded ${param.TestResultsFile}" failonerror="true"/>
    • </target>
    • <target name="BuildFailure">
      • <exec program="${param.EventLogPgmName}" commandline=" ${param.BuildNameForLog} Failed" failonerror="true"/>
    • </target>

4) The last step is to build the report.  If you are already familiar with SSRS you will probably find this step to be very simple.  SSRS is easy to use and is free if you have a Standard Edition of SQL Server.  Here I share the more advanced features for the report.

This is the SQL to get the results of the most recent execution of each job:

SQL for Report

For each job listed on the report we want to show the record in green (Lime) if it passes and red (Tomato) if it fails.  So I modified the BackgroundColor property of each row and gave it this expression:

=IIF(Fields!Result.Value = "Succeeded", "Lime", "Tomato")

For the two columns with the number of tests that passed or failed, I had to test if the field was null first, then apply the color:

=IIF( IsNothing(Fields!TestsFailed.Value), IIF(Fields!Result.Value = "Succeeded", "Lime", "Tomato"), IIF(Fields!TestsFailed.Value > 0, "Tomato", "Lime"))

We have been very pleased with the results of our Build Summary process and continue to add more things to it.  It was easy to implement and has run very reliably.


You can download the code from here: http://www.kraftsoftware.com/BuildYourOwnSummaryEmailCode.zip

Posted in CodeProject, Coding, Dev Environment, Process

How to move data from a DBISAM database into SQL Server

Posted by robkraft on December 15, 2014

In December 2014 I took on a task to retrieve all the data from an application storing its data in a DBMS format known as DBISAM.  DBISAM stores data for each table in separate files.  I discovered that each table consisted of either two or three files: a .dat file with the data, a .idx file for indexes, and optionally a .blb file for blob storage of images or large text blocks.

The first, and just about only, place you can go on the Internet for things related to DBISAM is www.elevatesoft.com.  You can download some free tools to assist with this task, and there are occasionally used forums where you can post questions.  The staff responded quickly to my one post.  From the downloads section you can download a package of tools called DBISAM-ADD.  Get the most current version.  The robust installer will place some programs on your computer (I was running Windows 8.1 and the tools worked fine).

The helpful tool is called “Database System Utility”, which is a program named dbsys.exe installed to C:\Program Files (x86)\DBISAM Additional\Version 4\utils\dbsys\win32.  With this program you can select any table to open and run queries against it, including data modifications.  You can also use the Export option to export the data to a .csv file.  However, blob data, including Memo fields, will not be included in the exported .csv file, so I do not recommend this option.

DBISAM Export

Getting the Data from Memo Fields Requires ODBC

ElevateSoft provides an ODBC driver on the same download page for the other DBISAM tools.  After running the installer for the ODBC driver, you need to open “ODBC Data Source Administrator (32-bit)” and add a “System DSN”.  I think you can use all the defaults if you are connecting to DBISAM files on your local computer.   My DBISAM files all have a password to protect them, so I needed to enter that password on the last tab/prompt of the ODBC configure wizard.  Then I was ready to use that ODBC driver with SSIS.

SSIS requires a standard edition or higher of SQL Server.  You cannot use it with SQL Server Express Edition.  Although I have installed a standard edition of SQL Server 2014, the most current SSIS tools are for SQL Server 2012.  And even though they use Visual Studio, I don’t think you can open Visual Studio directly, or at least that has not worked for me.  To get into SSIS, I open “SQL Server Data Tools for Visual Studio 2012”.

The secret to importing data using ODBC from DBISAM is to use the SSIS Import and Export Wizard.  I could not figure out how to do this import without the wizard.  Only the wizard allowed me to pick the ODBC connection I had created to the DBISAM Files.  See the picture below for an example of the Source configuration for the wizard.

Configure Data Source for DBISAM

For the destination you can pick your SQL Server database, then you should be prompted with a list of tables to include.  I had over 300 tables to import so I ran the wizard about 10 times and broke the 300 tables into smaller groups so that I could rerun the smaller chunks later.

The wizard does seem to have a problem with date and time fields coming from DBISAM and it will set their datatype to a number for mapping as in the screen shot below.  Simply change that to a datetime, or smalldatetime if you prefer, and you should be ready to go.

Edit Mappings for DBISAM

If you plan to run this multiple times, you will probably want to write a script that runs before your packages to delete the tables you created if they already exist.  I recommend using the “Execute SQL Task” on the “Control Flow” tab of your package and having it run your “Data Flow Tasks” upon completion.

I also had some imports fail because the dates entered by the users into the DBISAM based system were outside the range of valid dates that SQL Server allows.  I found data with dates like ’12/31/0210′ instead of ’12/31/2010′.  I used the tools in the DBISAM utility to correct that data as shown here:

Run A Data Modification Query in DBISAM

Select “New Query” from the File menu and you can type in a query.  Dates must be surrounded by single quotes and be in yyyy-mm-dd format.  After correcting the data I was able to successfully import all the data.

Even the blob data such as images came across successfully into SQL Server blob datatypes.

Send me a message if you have any trouble with this.

Posted in Coding, SQL Server | 4 Comments

How to Get the Simplest NG-Include from Angular to Work

Posted by robkraft on November 17, 2014

From the other examples I found online I thought it would be easy to start using ng-include in angularJS to pull in bits of HTML from other files, but after hours of struggle spread out over several days I found it was not.

Let me correct that.  It IS easy to do, you just have to be aware of some very important details.

  1. If you are attempting to do this on your local file system it will probably fail.  The example I have below is syntactically correct, but as of this writing in November of 2014 it only works on Firefox.  If you are using Chrome or IE you will only see the value of 3 in your output, not the contents of the included file, because those browsers block the XMLHttpRequest that ng-include uses to fetch a file from the local file system.  I put the {{ 1 + 2 }} in the html to show that Angular was working, even when the ng-include is not.
  2. If you move your files into a hosted web server and access them through the web server they probably will work.
    1. But if you use IE and your browser uses compatibility mode to view the web server you are hosting the files on, it still does not work.  I think this is because Angular doesn’t support pre-IE8 browsers and compatibility mode is essentially rendering as IE7.

Here is the working code from my two files:

I name this file mypage.html

<!doctype html>
<html>
  <head>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.2/angular.min.js"></script>
  </head>
  <body ng-app>
    {{ 1 + 2 }}
    <div ng-include="'myinclude.html'"></div>
  </body>
</html>

Please be aware that you should surround your include name with single quotes which are encased in double quotes.

I name this file myinclude.html

My include is Showing!!!

Posted in Coding

The AntiVirus Software from AVG is Itself a Virus

Posted by robkraft on November 8, 2014

I’ve been personally trying different anti-virus packages for Windows 8.1 and I must report deep disappointment with the free version of AVG anti-virus.  I have two complaints:

  1. When I apply an upgrade of the software it changes my browser homepage to their own URL.  I did not ask it to do this and did not even see any option where I could prevent it.  But the default behavior of an upgrade should not be to make changes that I don’t want to my computer.  That is the definition of a virus: software that changes my computer in ways I don’t like to benefit the writers of the software.
  2. The second problem I had was that my settings in the browser telling Internet Explorer 11 what to do when I open a new tab would not stick.  Every time I rebooted the computer they changed back to the default settings, a blank tab.  It took me a while to track down the cause: an addin that AVG Antivirus installed in my browser.  I don’t mind the addin, but I do mind that it kept changing my IE settings.

I guess it is time to try a different package.  Microsoft Security Essentials does not do anything unexpected and can be trusted, but it also has a reputation for being among the poorest at catching viruses.

Posted in Home Tech, I.T.

The Correct Way to Write A SQL Server Trigger That Modifies the Record Just Inserted

Posted by robkraft on October 31, 2014

It is easy to write a trigger that has unintended consequences, which is why I recommend you always test your triggers for two things:

  1. Did the trigger update the records I expected it to correctly?
  2. Did the trigger update records I did not expect it to?

Here is an example of a trigger we had that was incorrect.  We discovered it because it was taking so long to run the insert query that the query was timing out.  And it was taking so long because it was updating all the records in the table, not just the one we wanted it to modify.

In this example, we want to update the OrderDetails date to be equal to the OrderDate from the parent table, but only for orders with type=200 and only when new order details are inserted.  Run this SQL to create the tables and add some test data.

drop Table Orders
go
drop table Orderdetails
go
Create Table Orders (OrderID int, OrderDate datetime, OrderType int)
Create Table OrderDetails (DetailID int, parentOrderID int, DetailDate datetime)
insert into orders values (1,'10/25/2014',100)
insert into orders values (2,'10/27/2014',200)
insert into orders values (3,'10/28/2014',100)
insert into orders values (4,'10/29/2014',200)
insert into OrderDetails values (10, 1, '10/25/2015')
insert into OrderDetails values (11, 1, '10/26/2015')
insert into OrderDetails values (14, 2, '10/25/2015')
insert into OrderDetails values (15, 2, '10/26/2015')
insert into OrderDetails values (16, 3, '10/25/2015')
insert into OrderDetails values (17, 3, '10/26/2015')
insert into OrderDetails values (18, 4, '10/25/2015')
insert into OrderDetails values (19, 4, '10/26/2015')

Then run this script to create the incorrect trigger:

GO
CREATE TRIGGER dbo.MyFirstTriggerThatDoesntWork
ON OrderDetails
FOR INSERT AS
BEGIN
UPDATE OrderDetails
SET DetailDate=Orders.OrderDate
FROM Orders INNER JOIN
INSERTED I ON Orders.OrderID = I.parentOrderID
WHERE (Orders.OrderType = 200)
END

Now let’s run an INSERT and check the results:

insert into OrderDetails (DetailID, parentOrderID) values (20, 4)

The messages displayed look like the following, and that message with “9 row(s) affected” is a good indicator that something went wrong:

(1 row(s) affected)
(1 row(s) affected)
(9 row(s) affected)
(1 row(s) affected)

Looking at all of our data, we can see that every record got updated to the 10/29 date:

select * from OrderDetails
10 1 2014-10-29 00:00:00.000
11 1 2014-10-29 00:00:00.000
14 2 2014-10-29 00:00:00.000
15 2 2014-10-29 00:00:00.000
16 3 2014-10-29 00:00:00.000
17 3 2014-10-29 00:00:00.000
18 4 2014-10-29 00:00:00.000
19 4 2014-10-29 00:00:00.000
20 4 2014-10-29 00:00:00.000

That is not what we wanted.  Here is the correct way to write this trigger. Do this after running the script above to drop and recreate the tables.

CREATE TRIGGER dbo.MySecondTriggerThatDoesWork
ON OrderDetails
FOR INSERT AS
BEGIN
UPDATE OrderDetails
SET DetailDate=Orders.OrderDate
FROM Orders INNER JOIN
INSERTED I ON Orders.OrderID = I.parentOrderID INNER JOIN OrderDetails d on d.DetailID = I.DetailID
WHERE (Orders.OrderType = 200)
END

If we run the insert again and look at the output messages we will see that only 1 record was affected and we can see the correct output in our results:

insert into OrderDetails (DetailID, parentOrderID) values (20, 4)
(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
select * from OrderDetails
10 1 2015-10-25 00:00:00.000
11 1 2015-10-26 00:00:00.000
14 2 2015-10-25 00:00:00.000
15 2 2015-10-26 00:00:00.000
16 3 2015-10-25 00:00:00.000
17 3 2015-10-26 00:00:00.000
18 4 2015-10-25 00:00:00.000
19 4 2015-10-26 00:00:00.000
20 4 2014-10-29 00:00:00.000

The line of code that made the difference is the extra join in the second trigger.  In the first trigger, we are updating the OrderDetails table with the result of a join between the Inserted table and the Orders table.  Basically, that says “run a query that joins the Orders table with the Inserted table, and apply the results of that join to EVERY record in OrderDetails”.  It is applied to EVERY record in OrderDetails because we did not apply any criteria to limit which records in OrderDetails are affected.

UPDATE OrderDetails
SET DetailDate=Orders.OrderDate
FROM Orders INNER JOIN INSERTED I ON Orders.OrderID = I.parentOrderID

The fix was to include a join to the table we are updating (OrderDetails) and limit which records it is going to affect by joining the OrderDetails table to the Inserted table.

UPDATE OrderDetails
SET DetailDate=Orders.OrderDate
FROM Orders INNER JOIN INSERTED I ON Orders.OrderID = I.parentOrderID INNER JOIN OrderDetails d on d.DetailID = I.DetailID

In the example above, the Inserted table would have 1 record in it, with the values of DetailID=20 and parentOrderID=4.  Our trigger would update a record in OrderDetails that had the same DetailID of 20 and set the DetailDate equal to the OrderDate of the Order we were joined to.

Posted in SQL Server, Uncategorized

How to Decrypt RSA with SHA512 in .Net for FIPS compliance

Posted by robkraft on October 29, 2014

We recently installed software at a client that had FIPS compliance enabled on their Windows 7 PCs and our software encountered an error trying to decrypt some of our license keys at the client.

Our fix was simple, but I only figured out how to do it by using Telerik’s JustDecompile on mscorlib.dll to figure out what was happening.

We are using RSACryptoServiceProvider and SHA512.  Our code looked like this:

success = rsa.VerifyData(bytesToVerify, CryptoConfig.MapNameToOID("SHA512"), signedBytes);

but that does not work because the "SHA512" passed into the MapNameToOID function returns the identifier of a SHA512 implementation in Windows that is not FIPS compliant.  In fact, MapNameToOID does not support any FIPS compliant SHA512 implementation.

The only way we could get the signature to verify was to pass a string with the full class name as the second parameter instead of using the MapNameToOID method:

success = rsa.VerifyData(bytesToVerify, "System.Security.Cryptography.SHA512CryptoServiceProvider", signedBytes);
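For context, here is a sketch of the surrounding verification code.  The key setup and variable names are my assumptions; only the full class name string is the actual fix:

using System.Security.Cryptography;

// A sketch: verify a signature using the FIPS-compliant SHA512 CSP class.
static bool VerifyLicense(RSAParameters publicKey, byte[] bytesToVerify, byte[] signedBytes)
{
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.ImportParameters(publicKey);
        // Passing the full class name selects SHA512CryptoServiceProvider,
        // which wraps the FIPS-validated Windows implementation.
        return rsa.VerifyData(bytesToVerify,
            "System.Security.Cryptography.SHA512CryptoServiceProvider",
            signedBytes);
    }
}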

Microsoft has a list of their algorithms here:

http://msdn.microsoft.com/en-us/library/system.security.cryptography.cryptoconfig(v=vs.110).aspx

For more about FIPS I recommend starting with this article:

http://blog.aggregatedintelligence.com/2007/10/fips-validated-cryptographic-algorithms.html

Posted in Coding

Advantages of Pure Functions and Idempotent Functions in .Net

Posted by robkraft on June 28, 2014

Summary

This article will define a “Pure Function” and an “Idempotent Function”. It will also explain the advantages of each type of function using examples in C# .Net.

Definitions

A method is a “Pure Function” when it always returns the same results when given the same inputs.  This method is a “Pure Function”:

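(The original code screenshot is not reproduced here; a minimal stand-in example:)

// Pure: the return value depends only on the arguments.
public static int Add(int a, int b)
{
    return a + b;
}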

This method is not a “Pure Function”:

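(Again a stand-in for the original screenshot:)

private static int _total;

// Not pure: the result depends on, and changes, state outside the method.
public static int AddToTotal(int a)
{
    _total += a;
    return _total;
}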

“Idempotent Functions” have two slightly different definitions.

  • In much of computer science, an “Idempotent Function” is a function that always returns the same result when given the same input and can include calls to methods that retrieve data from a database and also calls like HTTP GET and HTTP DELETE.  Methods that can be called repeatedly that will return the same result are “Idempotent”.


  • In “Functional Programming”, “Idempotent functions” are more like “Pure Functions” that go a step further. In “Functional Programming”, if you take the output of an “Idempotent function” and call that function again using the output of the last call as the input for the next call, you will get the same result again. Here is an example of using an “Idempotent Function” named abs:

var x = abs(-3); //x = 3

var y = abs(x); // y = 3

The result of calling abs(-3) is 3.  And that is the same result as calling abs(abs(-3)).

More succinctly:

abs(abs(x)) = abs(x)

Apology

I previously wrote an article that I titled “Advantages of Making Your Methods Idempotent and Static in C#”, but I misunderstood idempotent. My article was really about “Pure Functions”, not “Idempotent Functions”. So in this article I attempt to make amends for putting something untrue on the Internet.  I want to thank Phil Atkins in Cambridge, UK for patiently and persistently helping me to realize the error in my original article.

Advantages of Pure Functions

When adding methods to classes, many developers spend little time deciding if the method should be a pure function. Making use of the properties in the class causes the method to not be a pure function, as in this example of the method named ReturnNumberOfDaysSincePersonWasBorn:
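(The original screenshot is not reproduced here; a reconstruction along the lines the post describes:)

// Not a pure function: the result depends on the BirthDate property
// and on DateTime.Today, neither of which is passed in.
public int ReturnNumberOfDaysSincePersonWasBorn()
{
    return (int)(DateTime.Today - this.BirthDate).TotalDays;
}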

Here is the ReturnNumberOfDaysSincePersonWasBorn method re-written to be a pure function:

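(A reconstruction; passing the current date in as well keeps the method fully deterministic:)

// Pure: everything the calculation needs comes in through the parameters.
public static int ReturnNumberOfDaysSincePersonWasBorn(DateTime birthDate, DateTime asOfDate)
{
    return (int)(asOfDate - birthDate).TotalDays;
}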

And here is another version of the method that is not a pure function. This time, the method is not a pure function because it is changing the value of a variable (Age) scoped outside of the method.
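(A reconstruction of that version:)

public int Age { get; set; }

// Not pure: it changes the value of Age, which is scoped outside the method.
public int ReturnNumberOfDaysSincePersonWasBorn(DateTime birthDate)
{
    int days = (int)(DateTime.Today - birthDate).TotalDays;
    Age = days / 365; // side effect on state outside the method
    return days;
}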

Here are the advantages to methods that are pure functions:

  • The methods are easier to maintain. They are easier to maintain because a developer needs to spend less time analyzing the impacts of changes to the method. The developer does not have to consider the state of other inputs that are used in the method. When considering a change to a method that is not a pure function, the developer must think about the impact on properties used in the method that were not passed in.
  • The methods are easier to test. When writing a unit test, it is easy to pass values into the method and write the test to verify the correct output. But writing tests for methods that are not pure functions takes longer and is more complicated because more setup or injection of values is necessary for the test.
  • The methods are easier to re-use. You can call the method from other places in the module, or call the method from several different modules easily when the method is a pure function.
  • The methods are easier to move to other classes. If you have a pure function in one class and you want to move it to a utility class to be used by other modules, it is easy to do so.
  • The methods are more likely to be thread-safe. Pure functions don’t reference variables in shared memory that are being referenced by other threads. Caveat: variables such as objects that are passed by reference could still experience threading problems when used inside of static methods that are pure functions.

Resist the Temptation!

When you encounter a case like the one below, where Method3() needs the BirthDate for a calculation, it can be tempting to change the methods from static to non-static and to reference the BirthDate property in Method3(). The other alternative is to pass the BirthDate into Method1(), and from Method1() to Method2(), and from Method2() to Method3(). Although we don’t like to make those changes, doing so keeps the methods pure functions and keeps the advantages provided by pure functions.

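(The original illustration is not reproduced here; a sketch of the pass-it-down alternative, with placeholder method names:)

// Passing the dates down the chain keeps Method3 a pure function
// instead of letting it read the BirthDate property.
public static int Method1(DateTime birthDate, DateTime asOf) { return Method2(birthDate, asOf); }
public static int Method2(DateTime birthDate, DateTime asOf) { return Method3(birthDate, asOf); }
public static int Method3(DateTime birthDate, DateTime asOf)
{
    return (int)(asOf - birthDate).TotalDays;
}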

In C# .Net, if you are unable to mark your method “static”, then it is probably not a pure function. If the method makes database calls, API calls, updates properties outside of the method, or uses variables that have scope outside of the method, then it is not a pure function, but it may still be considered idempotent according to the non-functional definition of idempotent.

Advantages of Idempotent Functions

If you are not a functional programmer and you define idempotent functions as “functions that always return the same result for specific inputs”, then the benefits of idempotent functions are the same as the benefits of pure functions, except when they involve retrieving or updating data in locations outside the function.  I rarely think about idempotent functions from a functional perspective, but idempotent functions do provide the following benefits to intelligent code compilers.

  • In some languages, if a compiler recognizes that calling abs(abs(abs(abs(x)))) will always return the same result as abs(x), it can substitute abs(abs(abs(abs(x)))) in the written code with the more efficient abs(x) in the compiled code.
  • In some languages, if a compiler recognizes that a function is idempotent, it may be able to cache the result of the call to the function and provide the cached value in place of making calls to the function again with inputs previously used.

In functional languages, the primary advantages of idempotent functions do not apply to the developer writing and maintaining code, but rather to the compiler and performance of the program.

 References

http://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning


Posted in Code Design, CodeProject, Coding

Advantages of Making Your Methods Idempotent and Static in C#

Posted by robkraft on June 22, 2014

THIS ARTICLE IS INCORRECT

I apologize for putting incorrect information on the Internet.  This article explains “Pure Functions”, not “Idempotent Functions”.  I have written a new article to provide a more accurate description of “Idempotent” and also “Pure Functions” here:

https://csharpdeveloper.wordpress.com/2014/06/28/advantages-of-pure-functions-and-idempotent-functions-in-net/

My thanks to Phil Atkins in Cambridge, UK for making me aware of my error.



Posted in Code Design, CodeProject, Coding
