DevQuest

Just build it… or not?

Sometimes it is hard to start a blog post: you have everything you want to say in your head, but you cannot find the “Program/Main” or entry point of your thoughts. So here I am; I will just list what is in my mind and hopefully it will make sense in the end.

Ok so, the app you are working on is built with different tools and/or frameworks and you are asked to implement a new feature. The first thought is: “Let’s see what this framework has to offer”, or “Let’s see if there is already a widget in this toolbox I am using”, etc. But when you start digging into it you realize that there is something similar but not quite what you need, or the design is completely off, or the back-end communication does not fit your model. So you dig deeper, you start hacking into the widget/control or other feature/functionality and you say: “…ah, ok, this works this way, but I can change it here and make it work…”, then you find another obstacle, and you solve it, and so on. Have you ever reached the end of this process with a result that reminds you of nothing so much as Frankenstein? Yes, something like a patchwork, something that… if you do not have enough “electricity” you cannot give it “life”, something that can break any minute.

At that point you have spent hours, if not days, going through documentation, testing and writing code, and you think: “Is it worth it?… Shouldn’t I just build it?”. Well, maybe. But then, thinking of “do not reinvent the wheel!!”, “use and re-use existing code”, “someone for sure has already done the work”, “isn’t that what Google is for?…”, you start to have doubts. So I guess my point is: when do we stop searching and start building? Great question, isn’t it?

I guess sometimes we go too far in our search for the solution because we think that out there there is always someone better than us, someone who has already thought about it and made a great product. So is it a problem of self-confidence? But then, if you have too much of it, you end up putting too many hours into building it yourself and your boss will complain. So again, it is hard to find the right balance.

I will leave it at this, and I am open to ideas and suggestions. Tell me what your mental process is and share it with the community.

DevTools

Test APIs bypassing CORS

I’ve been struggling for a couple of days to get this app I am working on to POST to its back-end.  After tons of tries, many Stack Overflow reads and plenty of Ajax call property resets, I got to a working solution, and I want to share it with the community so that the next person doesn’t have to waste time.

Here it is, plain and simple: add these flags to the Chrome call (if you use Chrome):

--user-data-dir="C:/Chrome dev session" --disable-web-security

Explanation: CORS (Cross-Origin Resource Sharing) is the policy by which a call to an API from an app not residing on the same domain is rejected, unless the API server is configured to accept calls from that origin (or from any origin).  The policy is enforced by the browser, so, without going too deep into the details of how it works, the easiest way to get around it while developing is to disable web security in the browser.
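
A quick aside before the browser trick: the longer-term fix is to configure CORS on the API server itself.  The post doesn’t say what this particular back end is, so purely as an illustrative sketch, assuming an ASP.NET Web API 2 back end with the Microsoft.AspNet.WebApi.Cors NuGet package installed, it could look something like this:

// Illustrative only: enabling CORS server-side in ASP.NET Web API 2.
// The origin below is a placeholder for wherever your front end is served from.
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
   public static void Register(HttpConfiguration config)
   {
      var cors = new EnableCorsAttribute(
         origins: "http://localhost:3000",   // your dev front-end origin
         headers: "*",
         methods: "*");
      config.EnableCors(cors);

      config.MapHttpAttributeRoutes();
   }
}

If you don’t control the server, though, the browser-flag approach described here is the fastest route while developing.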

It turns out that the most recent Chrome versions do not accept the --disable-web-security flag by itself; you must also set the user data directory.  Therefore, create a directory of your choice and add this flag along with the previous one: --user-data-dir="my-data-directory".

For some reason, running this command from the command line:

Chrome.exe --user-data-dir="C:/Chrome dev session" --disable-web-security

did not work as well for me as creating a shortcut with the same command in its target.  Here are the steps:

  • Create a user data directory of your choice
  • Create a shortcut on your desktop pointing to the Chrome.exe app
  • Right-click on it, select Properties, and in the Shortcut tab’s Target field add this to the end of the existing value: --user-data-dir="C:/Chrome dev session" --disable-web-security.  The Target field should look something like this:
     "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --user-data-dir="C:/Chrome dev session" --disable-web-security

One last and important thing: make sure to close all Chrome instances before clicking the newly created shortcut to open Chrome with web security disabled.

I hope this helps.

EF

Entity Framework: is it in memory or persisted? (Ah-ha Moment # 1)

I am sure that a developer’s life is full of “ah-ha moments” or epiphanies. Somehow your thinking process gets stuck on something and you are convinced that a particular process works in a certain way; then the code breaks and you hit a wall, but after a good dose of re-education, some Stack Overflow and/or Googling, the “Ah-ha!!!” moment happens.
Well, I thought that documenting some of those moments would help me remember and hopefully help you find your answers faster. So here I am with the first post of the “Ah-ha Moment” series.


While working on this particular application, I needed a method to move an object from one section to another. Here is what I created:

public void MoveItemToItinerary(OptionalDetail optDetail, Guid DestinationMasterId, EMSDataModelContainer db)
{
   ItineraryDetail itiDetail = db.ItineraryDetails.Create();
   itiDetail.ItemID = optDetail.ItemID;
   itiDetail.TypeId = optDetail.TypeId;
   itiDetail.OrderId = db.ItineraryDetails.Where(d => d.ItineraryMasterId == DestinationMasterId).Max(o => o.OrderId).GetValueOrDefault() + 1;
   Guid ItiDetId = Guid.NewGuid();
   itiDetail.ItineraryDetailId = ItiDetId;
   //Removed for brevity

   db.ItineraryDetails.Add(itiDetail);
   //Some other operation not important in this case
}

The object needs to be added at the bottom of the “ItineraryDetails” collection; to do so, I needed to add 1 to the highest OrderId in the collection (OrderId is an int).  To get the highest number I used the Max method through the LINQ fluent API.
Now, this worked just fine, I was happy and I committed the code for production.  Then I got a call from one of the users saying that the application was messing up the order of the items.  Why??

It turns out that this method can also be called within a loop, when multiple items are moved at once.  And I noticed that in that case the Max OrderId kept being the same in every iteration; therefore, all the added items got the same OrderId!!?? Problem.

Why was that happening?? After some Googling and some Julie Lerman @julielerman (see her Pluralsight courses here and here) I realized that SaveChanges() on the DbContext was only called at the end of the loop, so each newly added item was not yet persisted to the DB.  That is a problem, since the call for the Max OrderId actually checks the DB and not the entities in memory.
And there is my “Ah-ha moment”… I had completely disregarded the newly added entities not yet persisted to the DB.  I needed to make sure that the call to Max would include such entities. EF comes to the rescue with a nice property: “Local”. If I include it in the call to Max, EF will check the in-memory copy of the collection and not the DB:

itiDetail.OrderId = db.ItineraryDetails.Local
   .Where(d => d.ItineraryMasterId == DestinationMasterId)
   .Max(o => o.OrderId).GetValueOrDefault() + 1;

Great! Let’s run the app and see if it works… and it doesn’t. With this change the first iteration returns 0 + 1, which is not correct if the collection already has items in the DB.

This makes total sense, as now we are only checking the local copy of the collection. Well, then I need to load the collection into memory before checking the Max OrderId. To do this I just need to add the following line of code with the beautiful “.Load” method:

db.ItineraryDetails.Where(d => d.ItineraryMasterId == DestinationMasterId).Load();

EF is smart enough not to overwrite entities that the context is already tracking, so in the case where my method is called within a loop, I don’t risk overwriting any entities.
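
As a quick sanity check (a hypothetical snippet, not code from the app, reusing the same db and DestinationMasterId as above): loading the same filtered set twice leaves the tracked entities untouched, so the Local count stays the same, assuming nothing changed in the DB in between:

// Hypothetical check, for illustration only.
db.ItineraryDetails.Where(d => d.ItineraryMasterId == DestinationMasterId).Load();
int firstCount = db.ItineraryDetails.Local.Count;    // entities tracked after the first load

db.ItineraryDetails.Where(d => d.ItineraryMasterId == DestinationMasterId).Load();
int secondCount = db.ItineraryDetails.Local.Count;   // same count: already-tracked entities are not duplicated or overwritten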

Now it works perfectly: EF loads the collection from the DB, and since the call to Max runs against the Local collection, the newly added entities are accounted for as well, even before calling SaveChanges(). Here is the final code:

public void MoveItemToItinerary(OptionalDetail optDetail, Guid DestinationMasterId, EMSDataModelContainer db)
{
   ItineraryDetail itiDetail = db.ItineraryDetails.Create();
   itiDetail.ItemID = optDetail.ItemID;
   itiDetail.TypeId = optDetail.TypeId;
   // Load the destination collection from the DB; entities already tracked are not overwritten
   db.ItineraryDetails.Where(d => d.ItineraryMasterId == DestinationMasterId).Load();
   // Max runs against the Local (in-memory) view, so entities added but not yet saved are included
   itiDetail.OrderId = db.ItineraryDetails.Local
      .Where(d => d.ItineraryMasterId == DestinationMasterId)
      .Max(o => o.OrderId).GetValueOrDefault() + 1;

   Guid ItiDetId = Guid.NewGuid();
   itiDetail.ItineraryDetailId = ItiDetId;
   //Removed for brevity

   db.ItineraryDetails.Add(itiDetail);
   //Removed for brevity
}
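
To put the loop scenario in context, here is a minimal sketch of a caller that moves several items and saves once at the end. MoveSelectedItems and the variable names are made up for illustration; only MoveItemToItinerary, OptionalDetail and EMSDataModelContainer come from the code above:

// Hypothetical caller, for illustration only (assumes using System and System.Collections.Generic).
public void MoveSelectedItems(List<OptionalDetail> selectedDetails, Guid destinationMasterId, EMSDataModelContainer db)
{
   foreach (OptionalDetail optDetail in selectedDetails)
   {
      // Each call loads the target collection and computes Max on the Local view,
      // so the items added in earlier iterations are counted even though nothing
      // has been saved yet.
      MoveItemToItinerary(optDetail, destinationMasterId, db);
   }

   // Persist all the newly added ItineraryDetails in one go.
   db.SaveChanges();
}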

So, my Ah-ha Moment this time was: “Make sure you load the DB entities into memory before running any logic that uses data from the whole collection.”

DevQuest

Where it all began…

A few years back I inherited a brand new web application.  It was commissioned by a desperate company looking to speed up its workflow: they wanted to be more efficient and deliver proposals worthy of the product they were selling.  The assumptions behind this application were that it would be versatile, fast and comprehensive; it was supposed to be developed in a .NET environment based on the new MVC framework (cutting edge at the time) with the latest bells and whistles, but instead…

When I was first exposed to the app, what I noticed were the eternal post-back times and URLs with .aspx extensions, which meant it wasn’t an MVC app.  Ding, dong… alarm… nothing matched the assumptions!!!  At the same time I kept hearing frustrated remarks from the company employees: “…come on! Load it already!  I need to make a call!  I need to send this proposal!…” and so on.

So I started diving into the source code and there it was: an old-fashioned Web Forms application with every possible bit of logic handled by the server.  Things like entering a quantity on an invoice line item and waiting for the server to calculate the total, then add the item to the invoice item list, and finally save the invoice.  Every click was a post-back: a nightmare!

An overall rewrite was needed, but budget and timeline did not allow it.  With very limited resources and the need to speed up all processes ASAP, I had to compromise and try to transition the app page by page, section by section and module by module, where possible.

This process (still ongoing today), despite being quite challenging, turned out to be full of interesting topics of discussion for any developer in the process of transitioning from a legacy application to something more modern (I know… what is “modern” today in our field? New frameworks and cutting-edge trends are already old the moment they hit the market…).  Therefore, I decided to document as much as possible in this blog.  And so emaquest.net was born: a chronicle of an evolution, a developer’s quest.

I hope you’ll enjoy…