Rob Kraft's Software Development Blog

Software Development Insights

Archive for the ‘Coding’ Category

The Project Management Triangle Fails To Value Uncertainty

Posted by robkraft on June 22, 2020


Most students of Project Management are familiar with a decision-making tool known as the Project Management Triangle.  It is also known as the Iron Triangle, or the Triple Constraint.  The concept gives product owners a mechanism for making trade-off decisions.

In a nutshell, the concept states that you may control two of the three project factors (Time, Cost, and Scope/Quality), but the third factor is mostly out of your control.  Therefore you can:

  • Have the project fast and within scope (all features desired), but then it may exceed the desired cost,
  • Have the project fast and within cost, but then you may not get all the features in scope,
  • Have the project within cost and within scope, but then you may not get it on time

The Project Management Triangle is used by project managers in many industries, but in software development another factor often needs to be considered when making project decisions, and that factor is uncertainty.


Uncertainty

Uncertainty doesn’t fit neatly in the Project Management Triangle.  Nor can you simply change the triangle into a square by adding uncertainty, because uncertainty does not move consistently with changes in the other three factors.  A decision that increases uncertainty may increase the time, increase the cost, decrease the quality, or even do all three.  Conversely, a decision that increases uncertainty could have the opposite effect on any or all of the sides of the triangle.

Side Note: Quality as Part of Scope

I believe that scope includes the aspect of quality.  There are two types of software quality.  One is the richness of features as seen by the users of the software.  The other is the robustness, integrity, and ease of maintenance of the code, the deployed solution, and the ecosystem the solution runs in.  Scope can easily be equated with the quality visible to the users of the software.  But the quality of the ecosystem and the maintainability of the source code are not so easily identified by users of the software.  Even so, for purposes of this article, I think that both types of quality can be included within scope.

The problem with the introduction of uncertainty is that, by definition, the decision makers don’t know if the detrimental effects will come to pass; only that detrimental effects are more likely to happen when uncertainty is introduced.  Therefore decision makers must decide whether the change that is intended to reduce cost, reduce time, or increase quality is worth the uncertainty that comes with it.


Proactively Reducing Uncertainty

Some decisions reduce uncertainty.  In fact, project managers sometimes deal with risks by creating a new project to explore the uncertainty and eliminate the risk it entails.  I hesitate to call out benefits of the waterfall methodology of development, but risk reduction might be a step more easily adopted in waterfall, as opposed to agile, because waterfall is more likely to identify and consider the risks at the start.  This allows for developing a prototype to test the use of a new tool, component, or framework, in order to understand it better and reduce the uncertainties.  For example, NASA reduces risks to its human astronauts by first launching unmanned rockets into space.

Acknowledging Uncertainty

In the world of software development, we often must choose the solution to a problem from a menu of options.  Some options increase quality but at the cost of time.  Others decrease time but at the risk of quality.  And some increase uncertainty, particularly when the option introduces a new technology, tool, or process.

When a project has a tight deadline, high visibility, or life-endangering impact, a team should be less willing to risk an option that introduces new technologies, and thus uncertainty.

Here are some examples:

  • A team discovers a defect during product development.  The team can choose a quick fix, which is to log an error message to a local text file, or choose a more robust fix, which is to log the message to a cloud-based service.  The robust fix is more desirable but might cause failures for some clients that implement the product in unexpected environments.  Is the team willing to accept the risk of the robust fix, or should they implement the quick fix and take more time to test and prove the robust fix before providing it in a future release?
  • A team chooses a technology they have not used before.  They choose to use .Net Core instead of the .Net Framework.  They accept that there are uncertainties, which means risks, to successfully developing with the technology as well as deploying it and supporting it.  They develop successfully but when they deploy they discover .Net Core is not supported on the server they deploy to.  They realize they need to upgrade the OS on the servers in order to support their application, but the IT team won’t be able to perform the OS upgrade for several more months.  This may be acceptable, but if the team had a tight deadline for the project then perhaps this project was not a good candidate due to the uncertainties that came with the new framework.
  • A team chooses a stronger encryption technology for the next release of their software.  When the software update deploys to some desktops it fails and the end users are unable to use the product.  The failure occurs because the encryption algorithm, a new technology thus containing uncertainties, is not supported by Windows Vista, an operating system that some customers still use.  You can probably guess how I came up with some of these examples. 🙂
  • A team chooses to skip significant testing of the software in order to release on time, within budget, while providing the scope desired.  Almost all members of software development teams can tell you that inadequate testing increases the risk that the software will encounter problems in production use.

In the course of my career, we have often opted for one solution over another not because it would reduce cost, reduce development time, or improve quality, but because it carried the least risk.  That was usually the choice when our release deadline was near and we did not want to introduce uncertainties.

Mitigating Risks

A technique we use to reduce the negative impact of risks is to adopt new technologies in minor features of our applications first.  For example, when deploying a new logging framework we used it only in one minor component.  When we created a framework for sending emails via SMTP we only used it in an optional feature.  As we grew confident with the new technologies and increased their robustness, we expanded their use across the application.

Here are a few good articles about the traditional benefits of the Project Management Triangle:

https://rapidbi.com/time-quality-cost-you-can-have-any-two/

https://en.wikipedia.org/wiki/Project_management_triangle

Posted in Coding, Process, Project Management

Programmer’s Dilemma – Should you fix bugs before adding new features?

Posted by robkraft on June 19, 2020

The Scenario:

The purchase order web app worked very well for the first four departments that used it, but when the art department tried to use it they experienced some problems.  A few features, printing and file uploads, did not work.  The art department liked the app despite the problems, but they felt they could be more productive if all the features worked for them.


Developers determined that differences in the way the Safari browser behaved on touch tablets caused the issues.  The other departments used the Chrome browser on desktop PCs.  The two developers estimated that it would take them a month to rewrite the app to work on Safari as well.  However, the two developers had several new projects queued up to work on.

Should the developers fix the first app to work on Safari before starting work on the new projects?

Let me provide you some additional information for consideration.

The developers have six new projects queued up, each will take them about one month to complete, and each is estimated to save the company $50,000 per month when completed.  Fixing the bugs in the first app to work better on Safari will make the art department more productive and save the company about $5,000 per month.


Based on this information, you could reasonably conclude that the return on investment of completing the new projects is higher than fixing the problems on the first project.  Therefore the developers should complete the next six projects before considering fixing the first project.
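
To put rough numbers on it, assume each item begins saving money the month after it is delivered, and compare the two orderings over the next seven months.  Fixing Safari first delays each of the six projects by one month, forfeiting 6 x $50,000 = $300,000 in savings.  Doing the six projects first delays the Safari fix by six months, forfeiting only 6 x $5,000 = $30,000.  Under those assumptions, the projects-first ordering comes out roughly $270,000 ahead.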


But let me provide you even more information to consider.

What if the knowledge gained and the frameworks created fixing the first project could be re-used across the next six projects, helping ensure that all of the projects work on both Chrome and Safari?  And what if the next six projects are public facing, and therefore likely to be used by people on iPhones with Safari browsers via a touch interface?  The developers might be able to create the future apps more quickly and with higher quality.


Additional information may make decisions more difficult, but these are examples of the challenging decisions we face in software development.  Many variables contribute to our decisions, and most of them can’t easily be measured in dollars and cents.


I once attended a meeting with several customers of our software application.  We had a list of projects we could work on.  Some projects were for adding new features and others for fixing problems.  I’ll never forget the response from one business owner after another business owner had just argued that some of the problems in the software were costing them money and needed to be fixed first.  He said:

“I think of my company like a bucket with holes.  It has water, in the form of money, pouring into it from the spigot, but it is leaking.  We can spend time fixing the holes, or we can run the water faster.  Which way will get the bucket full the fastest?”

More succinctly: “Which course of action provides the most value?”  That is the question we should be asking ourselves with every decision, every day.

Side note:

Shouldn’t we always fix bugs before adding new features?  Some development teams follow this philosophy, but each team should decide for themselves if that is the right philosophy for them.  Fixing bugs before starting new features delivers more value when the buggy features have not yet been released than it does when addressing bugs after they have been released.

Posted in Coding, Process, Project Management

Programmer’s Parable – Gathering Requirements Up Front

Posted by robkraft on June 12, 2020


The business owner asked the software developers to build a new web application for a single client.  She said that the web app needed to be really fast.  She told the developers this could be the first version of a new product that the company would build upon for years to come.

The developers were excited to work on something new.  They wanted to put their Agile skills to use and see if they could deliver the product within the expected two months while hitting their quality and performance goals.  They wondered what the product owner had in mind for the future beyond the first two months, but she was not readily available so they figured they would follow the Agile practice of focusing on one thing and getting it completely done before thinking about future features.

The team built a great site on time and within budget.  They cached as much data as possible to make it really fast.  The business owner was pleased.  The customer was pleased.  The developers were pleased because they had made everyone happy.  But the cloud IT manager was not pleased.  He noticed the price charged to the client barely covered the hosting cost of running the application in the cloud.


At the party to celebrate the success of the web site, the cloud IT manager asked the business owner if she had a plan to reduce the hosting costs so that the return on investment (ROI) could be earned more quickly.  She laughed and said, “Of course I have a plan to improve ROI, but it is not by reducing hosting costs.  Now that we have one client on board, I’m going to have the team add dozens more clients to the same web site.  The incremental cost of adding each new client should be trivial, just like it is with our other products.  Once we have more clients, the hosting costs will seem trivial!”  She walked away to talk to someone else without recognizing the doomed expressions on the faces of the developers who overheard the conversation.

The Scrum Master saw the developers looking sad.  He asked them why they looked sad.  A developer replied, “She said she wants us to add more clients to the application,” to which the Scrum Master replied, “Yes, I know.  It is going to be the top priority in our queue for the next iteration.  But why does that make you sad?”  The developer took a deep breath and said, “Because the only reason our web app is so fast is that we cache so much data on the server to avoid database calls.  The data cached is unique to the client.  We didn’t know that we would be expected to support multiple clients on the same web app.  We are going to have to redesign the entire app to support this.  Plus we don’t know if we can make it as fast as it is now.  If we had planned for the next release when we were developing the first version, we would have been able to avoid the complete rewrite that is now needed.”


Analysis

Is it Agile if you engage in Big Design Up-Front (BDUF)?  The answer is to point out that this is the wrong question.  In software, the question should never be, “Is it the Agile way?”  The question should be, “What is the best way?”  Also, in this parable, the problem is a lack of future requirements, not a lack of future design.

In hindsight, knowing there would be future requirements, the team should have gathered some of those requirements up front.  Gathering requirements and creating designs for future releases is not “anti-Agile” and does not violate any rules of Agile development.  Getting requirements and thinking about design early is good.  You should only question “the amount of time” you spend gathering requirements and creating designs for future releases; not whether or not you do so at all.  If the time you spend on requirements and design for a future release moves beyond hours, beyond days, and into weeks, then you should become concerned that the time you are spending will become waste.  When that time is measured in a few hours, it is probably well worth it.

https://en.wikipedia.org/wiki/Big_Design_Up_Front

Posted in Code Design, Coding, Process, Project Management

Programmer’s Parable – The Last Minute Decision

Posted by robkraft on June 7, 2020


The software developers looked forward to making the desktop software available for download the next day.  They had worked hard to create a high-quality, robust program to support customer needs for many years to come.  The business owner stopped by and asked if they had included the discount on purchases for his brother’s company in the software.  The developers looked surprised.  They did not know about this requirement.  The team lead admitted, “I’m sorry, I forgot about it.  You mentioned it to me when we had lunch a few months ago but I did not write it down.”  The owner said, “My brother will be one of our first customers, can you add this feature today before we release?”  One team member, a mid-level software engineer, spoke up, “Yeah, I think we can make a few simple changes to show a discount in the desktop user interface (UI) only for your brother’s specific account number and it should work fine.”  The response seemed to please the owner, but the software architect replied to the mid-level engineer, “I’m not so sure.  That may create new problems.”  He continued, addressing the product owner, “We will discuss it and get back with you.”

So, the developers held a meeting to discuss the problem.  They estimated they could modify the desktop program in a few hours, as the mid-level engineer suggested, to solve the problem in the short term, but what if additional customers needed a discount?  A programmer would need to make another code change and they would need to publish a new version as a patch.  And how would that logic get included into the mobile app they intend to release later that year?  They would have to code a similar workaround there.  They would have to maintain the quick fix code in several places.  Putting a quick fix into the UI introduces risks that the logic won’t be applied elsewhere when needed; it would also be harder to automate unit tests for the new logic; and it would require programmer intervention for future similar changes.  Another option, and the one the architect preferred, was to add a column to a table to indicate the discount, change the stored procedures and business objects to handle the discount, and also create a new user interface to allow the product owners to configure discounts without programmer intervention.


The developers made quick estimates and presented these three options to the product owners to make a decision:

Option 1: A quick fix.  They estimated a few hours to complete coding and testing the work.  The benefit would be releasing on time.  The drawback would be a design that risked this business rule not being maintained across the entire code base, and additional patches to the code being needed if the hard-coded discount changed or needed to be applied to other customers.  Another drawback is that the software architect did not favor this approach, and there were already concerns that he was getting frustrated with the company and might choose to quit.  The mid-level engineer, however, favored this approach because it would work, and it would get the product to the customers faster so they could be happier sooner.

Option 2: Delay the release for two weeks to allow the team to improve the design to support discounts that could be configured by business users without future changes to the code.  The benefit would be a design that is more adaptable to changes in product discounts in the future and that the product discount logic could be shared with the mobile app.  The drawback is that they would have to tell the customers the product release is delayed for a few weeks.

Option 3: Perform the quick fix, but immediately start working on the improved design to allow users to configure the discounts.  The benefit would include releasing the product on time, and the architect could get the good product design he was looking for.  However, the drawback to this approach is that it would take four weeks to roll out the product fix with the new design, because extra work would be needed to version the API due to the changes required to it, to migrate the database changes from one version to the next, and to perform all the upgrade and migration testing.

Which option would you choose?

There is no correct answer, but one choice may be better than another depending on the situation.

If the product is a Christmas promotion application for the year 2020, then the quick fix of option 1 is likely best assuming the program will only be used for a few months before it is discarded.


If the product will help its users start making or saving tens of thousands of dollars every day they use it, then you probably want to get it to them as quickly as possible, thus the quick fix of option 1 or option 3 is likely the best option so the gains can be realized even though a higher cost is placed on development.

If the development team has other valuable applications to develop, then perhaps you should choose option 1 instead of option 3, because the return on investment (ROI) of another application is greater than the cost of eliminating the risks that come with option 1.  Perhaps the business has another product that will earn $100,000 per day once deployed.  Should the developers spend a few weeks or a month making the first product better, when it only brings in $5,000 per day, instead of starting work on the second product that will earn $100,000 per day?  If the profit earned daily is high, then you should probably lean toward option 1.  If the profit earned daily is low, then you should probably lean toward option 2, assuming the application will be around for years to come.  If developer costs are low, particularly if your developers have nothing else they could be doing with their time, then option 3 is probably a good choice.


If developer morale is important and if the decision seems to have a big impact on developer morale, then perhaps you should accept the approach they recommend in order to boost their morale and increase their sense of ownership in the product.

We, and here I am speaking as a developer myself, often desire to fully complete a product or feature we are working on.  There is satisfaction and value in doing so.  But sometimes it is best to leave some code and applications partly sub-optimal, because there are things we could be doing with our time that are more valuable to the company.

In software development we often have many factors to consider in making product decisions.  Rarely are the costs and benefits clearly measurable.  Even time spent in analysis causes a delay in execution and therefore contributes to waste and increased costs, so it is sometimes counter-productive to spend a lot of time in analysis.


Posted in Code Design, Coding, Process, Project Management

How To Configure Your .Net Core Web App to Run OutOfProcess and Publish It That Way

Posted by robkraft on May 31, 2020

I put a .Net Core web site on my new web host, SmarterASP.Net, and published it from within Visual Studio 2019 with no problems. A few weeks later I added a second .Net Core web site to my account and published it the same way, then discovered my sites were returning 500.34 and/or 500.35 errors.

Both of my sites are running in the same App Pool on IIS and are using the defaults for .Net Core 3.1, which means they are running “inprocess”. But only one .Net Core 3.1 app can run “inprocess” per App Pool. I needed to move both of them to run “outofprocess”, which means they will run on Kestrel instead of IIS. I have no concerns about that, but I had trouble figuring out how to tell my “publish” profile in Visual Studio that I wanted the deployed site to run OutOfProcess. There is an option in the Project Properties Debug tab, but that does not carry over to the web.config that gets created when I publish my app through Visual Studio using FTP.

To change a .Net Core 3.1 app to OutOfProcess so that the web.config for the published site also uses OutOfProcess, you need to manually edit your .csproj file and add an AspNetCoreHostingModel element to the PropertyGroup that contains your TargetFramework:

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
  </PropertyGroup>
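
After you publish, you can confirm the change took effect by opening the web.config that the publish step generated on the server. It should contain an aspNetCore element with the hostingModel set, something like this (the dll name here is just a placeholder):

  <aspNetCore processPath="dotnet" arguments=".\MyWebApp.dll" stdoutLogEnabled="false" hostingModel="outofprocess" />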

Posted in Coding, Dev Environment

How To Reduce The Connection Pools For a .Net Web Site Hitting Multiple Databases

Posted by robkraft on May 9, 2020

Microsoft’s connection pooling built into the .Net Framework greatly improves performance by allowing hundreds of database calls to share the same pool of connections used to connect to the database. The .Net Framework creates a new pool of connections for each unique connection string. This works fantastically when your web application connects to a single database and all your connections use the same connection string. It even works well if you have two or three different connection strings, but for each unique connection string value, the .Net Framework creates a new connection pool. If your web site has dozens of different databases to connect to, you may find your application creates dozens of connection pools, which begins to consume a lot of resources on both the web server and SQL Server.

There is a fairly easy way to eliminate this problem, and that is to use the ChangeDatabase method on the Connection object.

Our web application was connecting to one of fifty different databases that all reside on the same instance of SQL Server. Each of our clients has their own database. But our application was creating fifty different connection pools. We were able to have all the clients share a single pool of connections by first connecting to a neutral database, then redirecting the connection to the desired database.

We created a new database on the server, called Central, then we had all the connections to the database first connect to Central as the Initial Catalog. The next step was to call ChangeDatabase to switch to the desired database. This technique did not create a new connection to SQL Server and did not create a new connection pool on the .Net client. Microsoft documentation mentions this technique here: https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling#pool-fragmentation-due-to-many-databases , but their example code does not show the ChangeDatabase method, which is more efficient than the approach they show.

In a nutshell, you can do this:

connection.Open();
connection.ChangeDatabase("Client1");
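
Wrapped in a helper, the pattern looks like this.  This is a minimal sketch, assuming a neutral Central database exists on the server and the caller disposes the connection; the names are illustrative, not from our production code:

using System.Data.SqlClient;

public static class ClientConnections
{
	// One connection string means one pool: always connect to the
	// neutral Central database, then redirect to the client database.
	private const string CentralConnectionString =
		@"Server=.\;Database=Central;Trusted_Connection=yes";

	public static SqlConnection OpenForClient(string clientDatabase)
	{
		var connection = new SqlConnection(CentralConnectionString);
		connection.Open();
		// ChangeDatabase redirects the existing pooled connection;
		// it does not open a new connection or create a new pool.
		connection.ChangeDatabase(clientDatabase);
		return connection;
	}
}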

I created a simple Windows Form App to test the idea.  The test app runs a query to return the name of the current database it is in.  Also, on the SQL Server I run a query to see all the connections that exist.  Doing this I was able to prove that using ChangeDatabase did not create a new connection to SQL Server and my .Net SQL Connection Object was pointing at the correct desired database.  Sample code, query, and output are below:

Sample C# .Net Forms App:

using System;
using System.Data.SqlClient;
using System.Windows.Forms;

namespace SQLConnectionTest
{
	public partial class Form1 : Form
	{
		//https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling#pool-fragmentation-due-to-many-databases
		private void button1_Click(object sender, EventArgs e)
		{
			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=master;Trusted_Connection = yes"))
			{
				connection.Open();
				RunSQL(connection, "select DB_NAME()");
			}
			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=model;Trusted_Connection = yes"))
			{
				connection.Open();
				RunSQL(connection, "select DB_NAME()");
			}
			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=msdb;Trusted_Connection = yes"))
			{
				connection.Open();
				RunSQL(connection, "select DB_NAME()");
			}
			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=master;Trusted_Connection = yes"))
			{
				connection.Open();
				RunSQL(connection, "select DB_NAME()");
			}
			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=master;Trusted_Connection = yes"))
			{ 
				connection.Open();
				connection.ChangeDatabase("TempDb");
				RunSQL(connection, "select DB_NAME()");
			}

			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=master;Trusted_Connection = yes"))
			{
				connection.Open();
				RunSQL(connection, "select DB_NAME()");
			}
			using (SqlConnection connection = new SqlConnection(@"Server=.\;Database=master;Trusted_Connection = yes"))
			{
				connection.Open();
				connection.ChangeDatabase("TempDb");
				RunSQL(connection, "select DB_NAME()");
				connection.ChangeDatabase("model");
				RunSQL(connection, "select DB_NAME()");
			}
		}

		public static void RunSQL(SqlConnection connection, string queryString)
		{
			SqlCommand command = new SqlCommand(queryString, connection);
			try
			{
				SqlDataReader reader = command.ExecuteReader(); 
				if (reader != null)
				{
					while (reader.Read())
					{
						Console.WriteLine("\t{0}", reader[0]);
					}
					reader.Close();
				}
			}
			catch (Exception ex){Console.WriteLine(ex.Message);}
		}
		public Form1()
		{
			InitializeComponent();
		}


	}
}

Output from running the app, written to the console:

master
model
msdb
master
tempdb
master
tempdb
model

Query to see connections on SQL Server:

CREATE TABLE #TmpLog
(SPID int, Status VARCHAR(150), Login varchar(100), HostName VARCHAR(150), BlkBy varchar(30), DBName varchar(60), Command varchar(500),
CPUTime int, DiskIO int, LastBatch varchar(20), ProgramName varchar(200), SPID2 int, requestid int)
INSERT INTO #TmpLog
EXEC sp_who2
SELECT SPID, Status, Login, HostName, BlkBy, DBName, Command, ProgramName
FROM #TmpLog
where STATUS not in ('BACKGROUND') and Command not in ('TASK MANAGER') and ProgramName like '.net%'
DROP TABLE #TmpLog
go

Running the query on SQL Server confirmed that the calls to ChangeDatabase did not create any additional connections.

Posted in Coding, SQL Server

A C# .Net Dialog For Connecting to SQL Server

Posted by robkraft on May 7, 2020

I created a Windows C# .Net Form and made it an example for anyone that wants to create a prompt for logging into SQL Server that looks and behaves like SQL Server Management Studio and other Microsoft products.

The code can be found here: https://github.com/RobKraft/SQLServerLoginTemplate

The code does not “share” the saved credentials with the Microsoft products. It stores those credentials in user-specific settings for this application alone. Passwords are encrypted using the Windows machine key (DPAPI), therefore this code will not work if used on Linux.
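
For anyone curious what the DPAPI calls look like, here is a minimal sketch of the idea; it is illustrative, not the exact code from the repo:

using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordProtector
{
	// DataProtectionScope.LocalMachine ties the ciphertext to the machine
	// it was created on, which is why the stored passwords cannot be
	// decrypted on another computer.
	public static string Encrypt(string password)
	{
		byte[] cipher = ProtectedData.Protect(
			Encoding.UTF8.GetBytes(password), null, DataProtectionScope.LocalMachine);
		return Convert.ToBase64String(cipher);
	}

	public static string Decrypt(string stored)
	{
		byte[] plain = ProtectedData.Unprotect(
			Convert.FromBase64String(stored), null, DataProtectionScope.LocalMachine);
		return Encoding.UTF8.GetString(plain);
	}
}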

Posted in Coding, SQL Server

SQL to Find Who Is Trying To Hack The SQL Server You Exposed To The Internet

Posted by robkraft on May 5, 2020

If you are running a SQL Server exposed to the Internet on the default port 1433, then hackers are probably trying to hack into it.  If the SQL Server is running on a different port it is less likely, but it could still be happening.  If you expose a SQL Server to the world, the hackers on the Internet will find it and will launch repeated login attempts against the ‘sa’ account.

That is a good reason for disabling the ‘sa’ account.

That is also a good reason for renaming the ‘sa’ account.
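
If you choose to do either of those, the T-SQL is a one-liner; the replacement name below is just a placeholder:

ALTER LOGIN sa DISABLE;
-- or rename it:
ALTER LOGIN sa WITH NAME = [NotTheAdminYouSeek];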

Most SQL Server admins record the Failed Login Attempts in the SQL Server Error Log.  If you have this option turned on, which is recommended, you may see entries like this:

Login failed for user 'sa'. Reason: Could not find a login matching the name provided. [CLIENT: 113.64.210.68]

You can write a SQL script to read the SQL Server ErrorLog and extract all the IP addresses that are attempting to log in to your server using the ‘sa’ account.  You can then provide that list to your firewall to blacklist them, although a whitelist of known good addresses is a much better approach.  After you blacklist those IP addresses, come back the next day and check your logs again, because you will probably find a whole bunch of new IP addresses doing the same thing.  This game of ‘whack-a-hacker’ can be played for a long time before the hackers run out of IP addresses.  By the way, if you are going to try to blacklist the bad IP addresses, I recommend blocking entire blocks like 113.64.*.*, not a single IP address at a time.
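
For example, on Windows you can block an entire range from the command line with Windows Firewall; the rule name and range here are placeholders:

netsh advfirewall firewall add rule name="BlockSqlAttackers" dir=in action=block remoteip=113.64.0.0/16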

Here is a SQL script to get you started with reading and filtering your SQL Server error logs.  Notice this script only looks for failed login attempts using the ‘sa’ account.  If you look at your logs, you will find failed login attempts for other accounts too.  Happy Hunting.

CREATE TABLE #TmpLog
(LogDate datetime, ProcessInfo VARCHAR(150), Text VARCHAR(max))
INSERT INTO #TmpLog
EXEC sp_readerrorlog
SELECT replace(Text,'Login failed for user ''sa''. Reason: Could not find a login matching the name provided. ',''), 
count(*)
FROM #TmpLog where Text like '%Client%' and Text like '%''sa''%' group by
replace(Text,'Login failed for user ''sa''. Reason: Could not find a login matching the name provided. ','')
having count(*)>10
order by 2 desc
DROP TABLE #TmpLog


Posted in Security, SQL Server

How to Write Multiple Rows To A Google Sheet From C# .Net

Posted by robkraft on March 26, 2020

Google provides good documentation to help you get started reading from and writing to Google sheets using their .Net SDK here: https://developers.google.com/sheets/api/guides/values

But I didn’t find an example there, or elsewhere, of how to add or update multiple rows in a single API call to Google Sheets.  I figured out how to do it and am sharing it here.

using System.Collections.Generic;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Sheets.v4;
using Google.Apis.Sheets.v4.Data;

private static void InsertToSheet(UserCredential credential, List<Person> people)
{
	// ApplicationName is a string field defined elsewhere in the sample.
	SheetsService service = new SheetsService(new BaseClientService.Initializer()
	{
		HttpClientInitializer = credential,
		ApplicationName = ApplicationName,
	});
	string sheet = "People";
	string spreadsheetID = "1sRHWW2fynHZoQl__yourspreadsheetid__wwxfDuMUM";
	var range = $"{sheet}!A2:C51";
	var valueRange = new ValueRange();
	// Each inner IList<object> is one row of cell values.
	valueRange.Values = new List<IList<object>>();
	foreach (var person in people)
	{
		var row = new List<object>() { person.FirstName, person.LastName, person.BirthDate };
		valueRange.Values.Add(row);
	}
	var updateRequest = service.Spreadsheets.Values.Update(valueRange, spreadsheetID, range);
	updateRequest.ValueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.USERENTERED;
	var updateResponse = updateRequest.Execute();
}
public class Person
{
	public string FirstName { get; set; }
	public string LastName { get; set; }
	public string BirthDate { get; set; }
}

The trick is to define valueRange.Values as a List of IList<object>, then add each of your rows to that list one at a time as shown.  When specifying the range I started with row 2 (A2:C51) because my first row has headers, and I go through row 51 because I am always inserting exactly 50 rows of data.  Also notice that in my example I am overwriting rows by doing an “Update” instead of adding new rows via an “Append”.
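
If you would rather append new rows after the last row of existing data instead of overwriting a fixed range, the call is nearly identical.  This is a sketch reusing the service, sheet, spreadsheetID, and valueRange from the sample above:

var appendRequest = service.Spreadsheets.Values.Append(valueRange, spreadsheetID, $"{sheet}!A:C");
appendRequest.ValueInputOption = SpreadsheetsResource.ValuesResource.AppendRequest.ValueInputOptionEnum.USERENTERED;
var appendResponse = appendRequest.Execute();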

Posted in Coding

How to Emulate Azure Function Application Settings Read Via Environment.GetEnvironmentVariable on Local Machine

Posted by robkraft on March 25, 2020

I borrowed a bit of code from an existing .Net Framework app that I had used to write an Azure function years ago.  But my new app was based on .Net Core and the code to retrieve an Environment variable (Environment.GetEnvironmentVariable) did not pull a value out of my .json file.

It took me a little googling around to figure out how to make it work, so I figured I might as well share my findings in a blog post.  It will probably help “Future Rob” fix the problem more quickly, if no one else.

If you have code like the following that pulls values from the “Application Settings” in an Azure function:

data["refresh_token"] = Environment.GetEnvironmentVariable("refresh_token");
data["grant_type"] = "refresh_token";
data["client_id"] = Environment.GetEnvironmentVariable("client_id");
data["client_secret"] = Environment.GetEnvironmentVariable("client_secret");

but you want to run that code on your local machine, you just need to put your values inside the “Values” node of local.settings.json, as shown here.  The local Azure Functions host loads everything under “Values” into the process’s environment variables, which is why Environment.GetEnvironmentVariable finds them:


{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "client_id": "xxx",
    "client_secret": "yyy",
    "refresh_token": "zzz"
  }
}


Posted in Coding