Tuesday, February 20, 2018

Skype vs Visual Studio - Pair programming via Skype for Business meeting

Last week I was doing a pair programming session with a colleague who works on the other side of the globe. The screen sharing was done through 'Skype for Business'. It was about a generic retry mechanism to be used with Azure KeyVault. Unlike the retry code which is already available, we had to get the renewed key from Azure KeyVault in certain scenarios.

I was sharing my screen. Whenever we were in an intense debate and I made some code change, the screen sharing got lost. I would continue with my arguments if the change being tried was one I proposed, and when I asked why the other side was silent, he would tell me he had lost the sharing. It happened 4-5 times when we were trying the changes proposed by me. Then I started wondering what my colleague might be thinking about the screen sharing. Would he think that I was intentionally cutting the screen share? Never, because we have known each other for a long time and we know how software from Microsoft works.

When it got frustrating, we decided to investigate. If we cannot solve our own computer problem, how can we solve others' problems with computers? Retrying step by step a couple of times revealed the root cause.

Root cause

It is the Ctrl+Shift+S shortcut. As a habit from college, where the computers were desktops and the power could go out at any time, I used to save immediately after typing something. That habit continues even now, in the era of using laptops or even mobiles for programming. The shortcut in Skype for Business to end screen sharing is also Ctrl+Shift+S.

The real solution is to depend on Ctrl+Shift+B for building, which saves all the files. But it is really difficult to change the habit.

https://support.office.com/en-us/article/keyboard-shortcuts-in-skype-for-business-42ff538f-67f2-4752-afe8-7169c207f659
https://support.office.com/en-us/article/keyboard-shortcuts-for-skype-for-business-74eda765-5631-4fc1-8aad-cc870115347a

Tuesday, February 13, 2018

Azure @ Enterprise - Moving databases from SQL VM to SQL Azure

Introduction

Enterprises will have many databases running in their existing systems. If those systems are legacy, the databases might use legacy features which SQL Azure (the PaaS offering, not SQL Server on an Azure VM) does not support. How do we move such databases to SQL Azure as part of Azure adoption? If anyone wonders why SQL Azure is not backward compatible with standalone SQL Server versions: welcome to PaaS.

Research & Solution

If we google, we can find many options for moving an on-premises database to SQL Azure.


After doing good research, the best option was found to be the .bacpac mechanism using the SQLPackage utility. As we can see in any production database, the filegroups will be all over the place to increase performance. The bacpac mechanism using SQLPackage eliminates the filegroup issue in its latest versions.
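As a sketch, the export side of the same utility produces the .bacpac from the on-premises server; the server, database, and credential values below are placeholders, and the import counterpart is shown later in this post:

```
sqlpackage.exe /Action:Export /ssn:<source server> /sdn:<database name> /su:<user> /sp:<password> /tf:<local path to file.bacpac>
```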

Problems

But it may not be an easy and hurdle-free migration road. Below are some of the issues.

SQLPackage fails on large tables

SQLPackage.exe has its own timeouts. When there are large tables, the timeouts may be hit and it will error out. When it errors out, there could be a message as follows.

Processing Table '[dbo].[large tables with millions of rows]'.
*** A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

Solution

The message suggests a network issue, not specifically a timeout. But after trial and error, this turned out to be related to the timeouts of the SQLPackage.exe utility. It has some parameters to control the timeout. The usage is as follows.

sqlpackage.exe /Action:Import /tsn:tcp:<databaseserver>.database.windows.net /tdn:<database name> /tu:<user> /tp:<password> /sf:<local path to file.bacpac> /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P15 /p:Storage=File /p:CommandTimeout=0 /TargetTimeout:600

The two parameters that worked are /p:CommandTimeout=0 and /TargetTimeout:600. The values depend on the size of the database. /p:Storage=File is a must when we deal with large databases anyway; the other option, which uses memory, may drain it quickly.

SQL Azure Server supports only one collation

If the on-premise application is serving users globally, the collation of the database might be different from the collation supported by the SQL Azure server.

If anyone is confused about whether there is a SQL Server PaaS offering: yes, there is one. This easily makes us think that there are real VMs running behind the SQL PaaS offering.

Coming back to the problem, we can have a database in SQL Azure with a collation different from that of the SQL Azure instance. But if there are stored procedures which need to access system objects, they fail. The default answer would be to change the collation of the SQL Azure server, but unfortunately that is not supported. The SQL Azure instance is kind of hard-coded to the 'SQL_Latin1_General_CP1_CI_AS' collation. Microsoft has their own reasons, it seems. But as users, what can we do?

Solution

Modify our SQL code to include the collation, or change the collation of our database to the collation of the SQL Azure instance, which is 'SQL_Latin1_General_CP1_CI_AS'. It is simple to say "change the collation", but in an enterprise it is a sequence of approvals, especially if the collation is set as a standard across multiple applications.
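As an illustration of the first option, an explicit COLLATE clause can be added where user data is compared against system objects; the user table and column names here are made up:

```
-- Joining a user table against a system catalog view across collations
-- fails; forcing one side to the server collation resolves the conflict.
SELECT t.name
FROM sys.tables t
INNER JOIN dbo.AuditLog a            -- AuditLog / TableName are hypothetical
    ON t.name = a.TableName COLLATE SQL_Latin1_General_CP1_CI_AS;
```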

Conclusion

It is nice and good to use Azure. It works well when the application is built from scratch and cloud native. But when it comes to migrating existing applications, it is a nightmare. Sometimes it feels like Azure is not mature enough for the enterprise.

https://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx

https://github.com/Microsoft/vsts-tasks/issues/1441
https://social.msdn.microsoft.com/Forums/en-US/3dd204b3-603d-4c88-9f85-083f69323cd1/sqlpackage-publish-timeout?forum=ssdt
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-connectivity-issues
https://stackoverflow.com/questions/16089321/restore-a-bacpac-file-to-sql-azure-with-large-database-size-sqlpackage-exe
https://blogs.msdn.microsoft.com/azuresqldbsupport/2017/01/31/using-sqlpackage-to-import-or-export-azure-sql-db/

Tuesday, January 9, 2018

WCF - Concurrency, InstanceContextMode and SessionMode & request handling behavior

Though WebAPI is penetrating the service world in .Net, WCF still has a place in .Net. The main reasons are the support for protocols other than HTTP, and the many legacy systems out there which have to be maintained until the world fully moves to cloud native.

WCF is easy to develop with, but when it comes to production there are many internal things to know beyond just writing the service. The major area to understand before putting any WCF service into production is how the service is going to serve requests. Will there be one service object? How many requests will be processed in parallel? If more than one request is being processed, how many service objects will there be to process them? Is it one-to-one or one-to-many? If one service object processes many requests, what does that mean for thread safety? "Service object" here means the object of the class which implements the service contract interface. Hope everyone knows who creates the object of the implementation class.

There are many articles out there which explain how the service processes requests. So this post consolidates them all in one place to avoid googling.

MSDN Article 1 & Article 2 - These are the best articles if someone has enough understanding of the concepts. If someone is really fresh to WCF, this is not a great place to start.

Sivaprasad Koirala on Instancing & concurrency - This is a good place for freshers to start. He explains with diagrams, and there is a good sample which can be used to understand the behavior. But it talks only about instancing and concurrency. When instancing is combined with concurrency mode and session, the behavior changes.
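For reference, these three knobs are set through attributes on the contract and the implementation class; a minimal sketch, where IOrderService and OrderService are made-up names:

```
// Sketch: one service object per client session, handling that session's
// requests one at a time. The contract and class names are illustrative.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IOrderService
{
    [OperationContract]
    int GetCallCount();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class OrderService : IOrderService
{
    // ConcurrencyMode.Single serializes calls into this instance,
    // so this field needs no extra locking.
    private int callCount;

    public int GetCallCount()
    {
        return ++callCount;
    }
}
```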

If we are in a hurry and have to understand the behavior quickly, the best site is given below.

https://whinery.wordpress.com/2014/07/29/wcf-sessions-instancing-and-concurrency-explained/

It explains the combinations and tells what happens in each. But the unlucky blog post has no comments even now, 3 years later.

It does not end here. There is throttling behavior which might need to be tweaked. There is also the security mode, which gets enabled automatically for some bindings even though we may not need it, and it reduces throughput.
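For completeness, the throttling knobs live under the serviceBehaviors section of the service config; the numbers below are only placeholders to show the shape, not recommendations:

```
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- Values are placeholders; tune them based on load testing. -->
      <serviceThrottling maxConcurrentCalls="16"
                         maxConcurrentInstances="26"
                         maxConcurrentSessions="10" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```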

Tuesday, January 2, 2018

ReST based Web Sites & ReSTful Navigation

Introduction

ReST is highly regarded as an architectural pattern for integrating systems. It mainly uses APIs that point to resources and do operations on those. The URL often follows a cleaner hierarchical path pattern, without many key-value pairs, compared to conventional key-value-pair-based URL schemes. People often follow the ReST-based URL format for APIs, but it is not widely accepted for web sites.

This post aims to investigate bringing this ReST-based URL scheme to web sites, similar to APIs.

Why should I create web sites with ReSTful URLs

The same benefits of ReST-based API URLs apply here as well. The URLs will be easy to remember. It is easy to have separation of concerns. New features can be implemented totally separately, using their own areas / virtual directories. There is no need to mix with existing screens even if they are related. Simple and short URLs instead of lengthy, story-telling URLs.

e.g. www.mycompany.com/employeelist.aspx?empId=1 can be easily represented as
www.mycompany.com/employees/1
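In ASP.NET MVC, attribute routing is one way to get such URLs; a sketch, where the controller, action, and repository are made-up names:

```
public class EmployeesController : Controller
{
    // Maps www.mycompany.com/employees/1 to this action.
    [Route("employees/{id:int}")]
    public ActionResult Details(int id)
    {
        var employee = employeeRepository.GetById(id); // hypothetical repository
        return View(employee);
    }
}
```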

Why are there not many web sites following the ReSTful pattern

One reason could be the difficulty of following the pattern. In a product company, the development team gets more freedom to select URLs; then again, if the product owners don't know what ReST is and what its advantages are, they may influence the URL pattern. On the other side, the consulting industry is heavily driven by client demands. Though regular clients may not ask for particular URLs, semi-technical clients may.

Some tips for ReSTful web site URLS

Below are some tips for designing web site URLs in a resource-oriented way.

No operation oriented screens

The screens should point to resources and display them. For example, the URL below displays the employee resource.

www.mycompany.com/employees/1

Lists and details screens

If the URLs display the resources, how do we edit them? Which screen edits those resources? The better way is to use the lists-and-details mechanism.

If we display the resources in a list, they can be edited in the same list itself and saved using a button. Here the single page application (SPA) concept helps us more than navigating to an edit page.

Circular navigation

If the resources have circular relations, the navigation may also become circular.
For example, the employee page may show the department he belongs to as a hyperlink. Clicking that will navigate to the department page, which lists the employees in that department, including the manager. Going to the manager's page may display the employees under him, and clicking the same employee's link there will end up at the same employee page where we started.

Multiple navigation paths

Similarly, there could be multiple navigation paths to reach one resource. For example, the home page of a company may show the departments and the various projects it is currently doing. Navigating via a department as well as via a project may end up at the same employee page.

A powerful search experience to navigate

If a resource is buried deep in the hierarchy, it would be difficult to find without multiple clicks. So it is better to have a search mechanism where the resource can be searched for and navigated to by clicking the associated URLs.

http://mikeschinkel.com/blog/welldesignedurlsarebeautiful/

Tuesday, December 19, 2017

Azure @ Enterprise - Is Azure Functions Enterprise ready?

"It Doesn't Work That Way in Enterprise". This is a famous quote when we work in enterprise. Many things which does in a tick at startups never work that way in enterprise. Coming to Azure development, if we want to create a new Azure Function it might be just an entry in ARM template in startup and next deployment will have the Azure Function, but it might be 2 weeks process in Enterprise. Sometimes the Azure Function might have to be created manually based on tickets in Enterprise after multi level approvals and refer its name in ARM.

The focus of this post is: 'Are Azure Functions Enterprise ready?' Azure Functions is the flagship product from Microsoft to meet the demands of the Serverless world.

Signing the assemblies

A basic practice in .Net is to sign our assemblies before shipping them. If we use third-party libraries and those contain unsigned assemblies, we cannot sign our own assemblies.

Why are we discussing these basics in the context of Functions? The current Azure Functions SDK, which comes via a NuGet package, has references / dependencies which are not signed!

At the time of writing this post, Visual Studio 2017 (15.4.0) creates Azure Function projects which have references to unsigned DLLs.

To be more precise, the SDK has a dependency on Microsoft.Azure.WebJobs.Extensions.Http, which contains the assembly of the same name. If we open the assembly in JustDecompile, we can see the below.

Microsoft.Azure.WebJobs.Extensions.Http, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

It is the reader's choice to decide whether an SDK with unsigned assemblies is Enterprise ready.

Beta dependencies of Azure Functions SDK

No need to explain in detail. The Functions SDK has a dependency on Microsoft.Azure.WebJobs.Extensions.Http, which is still in the beta1 stage. Yes, it is the same package that has the unsigned assembly referred to above. The latest Functions SDK version available in the NuGet repository at the time of writing this post is 1.0.7, and it still has this beta reference. Refer to the link below for the dependencies. The SDK is in production, but not its dependencies.


Another thing Enterprise loves to put into production.

Pricing & vNet capability

If a service is internal, i.e. called by other front-end services or scheduled tasks etc., the enterprise never wants to expose it publicly. The solution for an internal Azure Function is to use the App Service Environment + App Service Plan combination. What we lose there is the cost factor: an ASE has a fixed cost and is never pay-per-use.

Execution timeout of Functions

At the time of writing this post, the max timeout of a Function is 10 minutes if it is on the consumption plan, i.e. the real pay-per-use model. But an enterprise may need to execute long-running operations greater than 10 minutes, if there is a queuing system or there are scheduled processes, and those should be strictly internal. To achieve that, they can never use the pay-per-use model of Functions; instead they must use either ASE+Functions or WebJobs. Enterprise users would consider this a roadblock.
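For reference, the timeout is configured through the functionTimeout setting in host.json; on the consumption plan the value below cannot be raised beyond the 10-minute cap mentioned above:

```
{
  "functionTimeout": "00:10:00"
}
```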

It seems Microsoft wanted an answer to Amazon Lambda in the Serverless world, and someone hacked Azure WebJobs into becoming Azure Functions. But it is really an excellent idea for startups, to reduce time to market and upfront investment.

Comparing Amazon Lambda vs Azure Functions for enterprise use is altogether a separate topic.

Disclaimer

Azure evolves at a fast pace, and these facts might not be relevant in the future.

Tuesday, December 12, 2017

Excel Function to get stock price without Yahoo finance API

The Yahoo finance API was free and helped many people get their things done. As per internet forums, many were doing business around that API. One of the best use cases is to pull stock prices into Excel spreadsheets.

Unfortunately, the API is discontinued now. So what is the alternative? It is nothing but another free API. A promising one is Alphavantage. We have to get their API key before using the API. It is free. No idea whether they throttle or will make the service paid later. The web site is given below.

https://www.alphavantage.co/

Below goes the Excel VBA code for an Excel function which accepts the stock symbol and returns the price.

Public Function StockQuote(strTicker As String) As String
    Dim key As String
    key = "<YOUR KEY FROM Alphavantage>"
    If Len(strTicker) = 0 Then ' IsMissing works only with Optional Variant parameters
        StockQuote = "No input"
        Exit Function
    End If
    Dim strURL As String, strCSV As String, strRows() As String, strColumns() As String
    Dim dbClose As Double
    Dim http As Object ' declare explicitly so the code works with Option Explicit

    'Compile the request URL with needed data.
    strURL = "https://www.alphavantage.co/query?function=TIME_SERIES_DAILY" & _
    "&symbol=" & strTicker & _
    "&interval=15min&outputsize=compact&datatype=csv&" & _
    "apikey=" & key
    
    On Error GoTo catch
        Set http = CreateObject("MSXML2.XMLHTTP")
        http.Open "GET", strURL, False
        http.Send
        strCSV = http.responseText
    
        ' The most recent information is in row 2, just below the table headings.
        ' The price close is the 5th entry
        strRows() = Split(strCSV, Chr(10)) ' split the CSV into rows
        strColumns = Split(strRows(1), ",") ' split the relevant row into columns. 1 means 2nd row, starting at index 0
        dbClose = strColumns(4) ' 4 means: 5th position, starting at index 0
        StockQuote = dbClose
        Set http = Nothing
        Exit Function
catch:
        MsgBox (Err.Description)
End Function

Thanks to the original coder who posted the snippet for the Yahoo API.

Tuesday, December 5, 2017

Waiting on multiple C# .Net awaits

Introduction

Async and await make developers' lives easy by avoiding callback hell in asynchronous programming. But they are equally harmful in the hands of typewriting coders. Mainly, those who don't know how things work can use async and await in the wrong way. This post examines one such scenario and how to avoid it.

Let's consider the example below. There are 2 independent web service calls to be made, and once the results are available, some operation is done using the results from both async calls.

private static async Task<string> GetFirstTask(HttpClient client)
{
            Log(nameof(GetFirstTask));
            return await client.GetStringAsync("http://httpbin.org/drip?numbytes=3&duration=3&code=200");
}
private static async Task<string> GetSecondTask(HttpClient client)
{
            Log(nameof(GetSecondTask));
            return await client.GetStringAsync("http://httpbin.org/drip?numbytes=6&duration=6&code=200");
}
private void Process(string first, string second)
{
            Log($"{nameof(Process)} - Length of first is {first.Length} & second is {second.Length}");
}
private static void Log(string msg)
{
            Console.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId}, Time {DateTime.UtcNow.ToLongTimeString()}, Message {msg}");
}

The first 2 methods return generic Task<string>. The URL uses httpbin.org, which is a hosted service for testing purposes. The duration in the query string controls the delay, meaning the response will come after that duration; this is just to avoid Thread.Sleep(). Process() just displays its parameters.

The normal way

Below is the code we often see from new async/await users.

internal async Task TestNormal_TheBadMethod()
{
    HttpClient client = new HttpClient();
    string firstrequest = await GetFirstTask(client);
    string secondrequest = await GetSecondTask(client);

    Process(firstrequest, secondrequest);
}

The output might be something like below.

Thread 1, Time 8:47:00 PM, Message GetFirstTask
Thread 9, Time 8:47:02 PM, Message GetSecondTask
Thread 7, Time 8:47:07 PM, Message Process - Length of first is 3 & second is 6

Problem

The line where GetFirstTask() is called will wait until the result is obtained, i.e. wait 3 seconds for the response from the web service. The second task will start only after the first is completed. Clearly sequential.

await at method invocation

This is another way developers try.

internal async Task TestViaAwaitAtFunctionCall_StillBad()
{
    Log(nameof(TestViaAwaitAtFunctionCall_StillBad));
    HttpClient client = new HttpClient();
    Process(await GetFirstTask(client), await GetSecondTask(client));
}

Output will look as follows.

Thread 1, Time 8:49:22 PM, Message GetFirstTask
Thread 7, Time 8:49:25 PM, Message GetSecondTask
Thread 9, Time 8:49:30 PM, Message Process - Length of first is 3 & second is 6

Problem

In some other languages, an await keyword at the function invocation might make it parallel. But in C# it is still sequential: it waits for the first await and then processes the second.

Making it run parallel

So what is the solution? Both Tasks should be created before we await their results, so that the tasks run in parallel. Once await is called, each either gives the result if it is available or waits until it is. So the total time is the longest wait, not the sum of all wait times. The code snippet below does it.

private async Task TestViaTasks_Good()
{
            Log(nameof(TestViaTasks_Good));
            HttpClient client = new HttpClient();
            Task<string> firstrequest = GetFirstTask(client);
            Task<string> secondrequest = GetSecondTask(client);
            Process(await firstrequest, await secondrequest);
}

Output looks below.

Thread 1, Time 8:55:43 PM, Message GetFirstTask
Thread 1, Time 8:55:43 PM, Message GetSecondTask
Thread 8, Time 8:55:48 PM, Message Process - Length of first is 3 & second is 6

Here the Tasks are created before any awaits are placed on them. So they ran in parallel.
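An equivalent shape, using the same helper methods defined above, is Task.WhenAll, which awaits both tasks in a single call:

```
private async Task TestViaWhenAll_Good()
{
    Log(nameof(TestViaWhenAll_Good));
    HttpClient client = new HttpClient();
    Task<string> firstrequest = GetFirstTask(client);
    Task<string> secondrequest = GetSecondTask(client);
    // WhenAll starts no new work; it just awaits the already-running tasks.
    string[] results = await Task.WhenAll(firstrequest, secondrequest);
    Process(results[0], results[1]);
}
```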

Will this work when the second call depends on the first call's result

Not at all, because the second call cannot start without the result from the first call. So this has to be sequential.

More reading

https://stackoverflow.com/questions/33825436/when-do-multiple-awaits-make-sense
https://stackoverflow.com/questions/36976810/effects-of-using-multiple-awaits-in-the-same-method