SQLServer

Monday, June 13, 2011

Creating my first Development Environment For SharePoint 2010 Part 1:The Dilemma

The first time I started developing PerformancePoint solutions inside of SharePoint 2010 for a client, I knew that to get beyond proficient and into the realm of expert I was going to have to spend a lot of my own time creating reports, KPIs, scorecards, and dashboards.  To do that I needed a SharePoint environment to call my own.  I successfully installed SharePoint 2010 on my laptop running Windows 7 using the steps outlined here, but it was slow going; I only have 3GB of RAM on my laptop.  Since I was traveling so much for work, it just had to do.

Then I was asked to tech edit SharePoint Business Intelligence 24 Hour Trainer, and suddenly my laptop just wasn’t going to cut it anymore. For one thing, I was going to have to uninstall and reinstall a couple of times just to properly edit the book.  For another, I have a more powerful machine at home (quad core and up to 8GB of RAM) and I wanted to install Windows Server 2008 R2 on it. But it’s my family’s computer, it was still running Vista, and I didn’t want to wipe all the programs I had installed on it.

While I was talking about my problem with Marc Stacey (@MarkGStacey), he suggested that I create a virtual hard disk (VHD) for setting up the Windows Server 2008 R2 environment. Even though I trust Marc’s opinion, I wanted to understand a little more before I ran off and tried to implement a VHD. The first thing I wanted to know was: what are the advantages of using a VHD? The most significant advantage, in my opinion, is not having to edit your current partitions. Another is that you are not limited in the number of VHD files on your computer, except by the amount of storage you have. You can also create differencing VHD files, basing one on another.

So now that I had a possible solution, it was time to try and implement it.  Next time I’ll go over the steps I took to set up the VHD. Until next time, GO GIANTS

Saturday, June 11, 2011

Inspired to run

I used to be in shape.  I used to be a runner.  Not anymore.  Now I’m a father, a husband, a BI Consultant for Pragmatic Works, and seriously out of shape.  But something happened this week that made me think that I could give back some of the pounds that I’ve gained.  You see, I’ve started using Twitter for professional reasons, and I follow a lot of people in the SQL Server community, and I noticed a bunch of tweets going around about #runwithsteve and @way0utwest.  In a nutshell, Steve Jones just crossed 1,000 straight days of running.  That is a seriously impressive feat.

They say that imitation is the sincerest form of flattery, so here I go.  I’m not going to try to run for 1,000 straight days; I’m just going to try and run every day, one day at a time, one mile at a time.  I’m starting today.

Here are the parameters I’m setting for myself.

  1. Dress appropriately for the run, meaning shorts, t-shirt, and running shoes.
  2. Run at least 1 mile straight; I cannot break up the run into 1/2 in the morning and 1/2 at night.
  3. Even if I do another form of exercise that day, I must still run.

That is it.  It’s not complicated; it just takes a commitment, and in part that is what this blog will be: my commitment to myself to post once a day with the results of my run for the day, with a link to map my run.

I also plan to schedule some races here and there just to keep things interesting and measure my improvement. The big one will probably be the Half Marathon in Jacksonville in 2012 (@262withDonna), with some random 5K and 10K races along the way.  If you have any tips or encouragement you want to pass along, please feel free to leave a comment.  Wish me luck and follow my progress here.

Friday, June 10, 2011

Task Factory Trim Plus Transform

I’m beginning to realize that I really like to do things the easy way.  Of course I always want things done right, but the easier the better, which is why I am always thankful that I get to use the Pragmatic Works software suite while I am developing SSIS packages.  Task Factory is very helpful in the mission to make my life easier.  It has 26 tasks, and it’s growing with every release.

One of the consistent problems that I run into while developing ETL packages is cleansing the data.  Sometimes strange or unwanted characters creep up in the data, or leading and trailing spaces need to be removed before landing the data in a new table.  Task Factory has a transform, called the Trim Plus Transform, that can help you quickly get rid of those unwanted characters or spaces.

The steps for configuring this transform are quick and easy.

Connect your source data to the transform.

[image: source connected to the Trim Plus Transform]

Double-click to open the transform editor.

[image: Trim Plus Transform editor]

Now you can select what actions you want to perform on each column. There are six actions to choose from in the drop-down:

  1. Trim all leading and trailing white-space characters
  2. Trim all leading white-space characters
  3. Trim all trailing white-space characters
  4. Trim specified characters from start and end
  5. Trim specified characters from start
  6. Trim specified characters from end

You can specify which characters you want to trim in the last three actions. Certain characters are difficult to enter for one reason or another, but the transform is built to handle them using escape sequences (e.g. Tab [\t], CarriageReturn [\r], LineFeed [\n], VerticalTab [\v], SingleQuote [\'], DoubleQuote [\"], NullCharacter [\0]).
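To make the six actions concrete, here is a small Python sketch of the same behaviors. This is illustrative only, not Task Factory's actual implementation (the real transform runs inside the SSIS data flow); the function and action names are my own, but Python's strip family maps closely onto the six actions.

```python
def trim_plus(value: str, action: str, chars: str = None) -> str:
    """Apply one of the six Trim Plus-style actions to a string.

    `chars` is only used by the 'specified characters' actions; an escape
    sequence like \\t or \\" arrives here as the literal character.
    """
    if action == "trim_both_whitespace":
        return value.strip()          # action 1: leading and trailing white space
    elif action == "trim_leading_whitespace":
        return value.lstrip()         # action 2: leading white space only
    elif action == "trim_trailing_whitespace":
        return value.rstrip()         # action 3: trailing white space only
    elif action == "trim_both_specified":
        return value.strip(chars)     # action 4: specified chars, both ends
    elif action == "trim_leading_specified":
        return value.lstrip(chars)    # action 5: specified chars, start only
    elif action == "trim_trailing_specified":
        return value.rstrip(chars)    # action 6: specified chars, end only
    raise ValueError(f"Unknown action: {action}")

# Stripping a stray trailing double quote, and tab/CR/LF padding:
print(trim_plus('5\'10"', "trim_trailing_specified", '"'))  # 5'10
print(trim_plus("\tpadded \r\n", "trim_both_whitespace"))   # padded
```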

[image: Trim Plus Transform configuration]

You can also specify whether you want to replace the column or add a new column as the output action.

If you look at the figure above, you can see that for the FeetInches column I am choosing to get rid of the extra double quote (").  The data looks like this going in:

[image: input data with stray double quotes]

Then, using \" to indicate that the double quote (") is the character I want to remove, I run the package, and here is my result:

[image: output data with the double quotes removed]

As you can see, it removed the unwanted characters, and now my data is ready to land in my table.

If you get a chance, take a look at Task Factory; it has 26 transforms designed to assist your development.

Well, that’s all I have time for today, since it’s Friday. Have a good weekend and good luck in your development adventures.

Task Factory Address Parser Transform

Have you ever received a list of customers or clients that you wanted to use for a mailing, but when you received it the address fields were not even close to being usable? Well, I have some great news: Task Factory has a transform that can do this work for you, with some very impressive results. The Task Factory Address Parse Transform will take your address data in its current form and separate it out into the standard Address Line 1, Address Line 2, City, State, Zip, and Country fields. On top of that, it is very simple to use.
Take a look at this set of addresses:
[image: sample unparsed addresses]
The time it would take you to parse out the addresses from a few thousand rows could be better spent doing… well, just about anything else (sleeping would be my choice, or maybe watching SportsCenter). Using the Address Parse Transform will help free up that time. Here is how you configure it:
[image: Address Parse Transform in the data flow]
[image: Address Parse Transform editor]
Once you have connected your source data to the Task Factory Address Parse Transform, you need to tell it which column or columns contain the address. That’s right: it doesn’t need to be in one column; it can be in multiple columns, and the transform will still separate the address into the standard fields. This first example will use one column. Select the appropriate column from the dropdown.
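The transform's parsing internals aren't exposed, but as a rough idea of what single-column address parsing involves, here is a hypothetical Python sketch. The regex, field names, and helper function are my own illustration, not Task Factory's API, and a real parser handles far messier input than this.

```python
import re

# Very simplified pattern for one-column US addresses of the form
# "street[, unit], city, ST zip" -- illustrative only.
ADDRESS_RE = re.compile(
    r"^(?P<line1>[^,]+?)"                                # street address
    r"(?:,\s*(?P<line2>(?:Apt|Suite|Unit|#)[^,]*))?"     # optional unit
    r",\s*(?P<city>[^,]+)"                               # city
    r",\s*(?P<state>[A-Z]{2})"                           # two-letter state
    r"\s+(?P<zip>\d{5}(?:-\d{4})?)$"                     # ZIP or ZIP+4
)

def parse_address(raw: str) -> dict:
    """Split a free-form address into standard fields; {} if unparseable."""
    match = ADDRESS_RE.match(raw.strip())
    if not match:
        return {}
    return {k: v for k, v in match.groupdict().items() if v}

fields = parse_address("123 Main St, Apt 4, Jacksonville, FL 32202")
print(fields["line1"], "|", fields["city"], "|", fields["zip"])
```

Even this toy version shows why you'd rather let a transform do the work than hand-clean a few thousand rows.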
Once that is done, select the Address Quality tab.
[image: Address Quality tab]
Once parsed, the addresses are assigned a quality by the Address Parser Transform in two sections of the address:
· Address Line 1 and Address Line 2
· City, State and Zip
There are three levels of quality that can be assigned to the two sections of an address:
· High
§ For address line 1 and address line 2, high quality indicates that the address is correct.
§ For city, state and zip, high quality indicates that the city, state and zip are correct.
· Medium
§ For address line 1 and address line 2, medium quality means the address is complete but is missing a part of the address, like an apartment number or street suffix.
§ For city, state and zip, medium quality means one of the city, state or zip code is missing.
· Low
§ For address line 1 and address line 2, low quality means the address is not verifiable.
§ For city, state and zip, low quality means more than one of the city, state or zip code is missing.
When the Address Parse Transform runs, it will parse out the data and assign the level of quality based on whether it can find a valid address and a valid city, state and zip according to the rules above. You can either include all of the addresses as part of the output from the transform, or you can choose "Include only high quality parsed addresses." By selecting "Include only high quality parsed addresses," the output named "Parsed Address Output" will contain only addresses that are considered high quality. You can then output the other rows to a separate table that will require more analysis, or simply disregard them altogether.
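The city/state/zip rules above can be sketched in a few lines of Python. The function and row names here are my own reading of the quality levels, not Task Factory's code; the routing function mimics what "Include only high quality parsed addresses" does, sending high-quality rows to the main output and everything else to a side output for review.

```python
def grade_city_state_zip(city, state, zip_code):
    """Grade the city/state/zip section per the rules described above."""
    missing = sum(1 for part in (city, state, zip_code) if not part)
    if missing == 0:
        return "High"    # city, state and zip are all present
    if missing == 1:
        return "Medium"  # exactly one of the three is missing
    return "Low"         # more than one of the three is missing

def route(rows):
    """Split rows: High quality to the parsed output, the rest to review."""
    parsed, review = [], []
    for row in rows:
        quality = grade_city_state_zip(
            row.get("city"), row.get("state"), row.get("zip"))
        (parsed if quality == "High" else review).append(row)
    return parsed, review

rows = [
    {"city": "Jacksonville", "state": "FL", "zip": "32202"},  # High
    {"city": "Jacksonville", "state": "FL", "zip": ""},       # Medium
    {"city": "", "state": "", "zip": "32202"},                # Low
]
good, needs_review = route(rows)
print(len(good), len(needs_review))  # 1 2
```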
[image: address quality output options]
You can also specify some null handling options here as well.
Finally, on the Parsed Output tab, you can specify how to handle the newly parsed data. You can:
§ Add New Column
§ Replace Column
You can also specify the data type and length of each column, as well as the code page and the casing of the data.
[image: Parsed Output tab]
Now just hook it up to your destination and watch the Task Factory Address Parser Transform go to work.
[image: parsed address results]
If you get a chance, take a look at Task Factory; it has 26 transforms designed to assist your development. Good luck to you all.