Category: problems

Adding Parameters To Excel SQL Query

By , December 13, 2019 12:19 pm

The other day, I was asked to parameterize an Excel spreadsheet that is populated by a SQL stored procedure. The spreadsheet runs each month to analyze the previous month’s sales and commissions. I had written a stored procedure that automatically pulled the previous month’s data, but now they wanted to be able to specify the Start Date and End Date.

I searched all over the internet for a way to do this. While I am not sure my solution is the most efficient, I decided to write it up to help anyone who has the same problem, and in the hope that someone will share a better way.

The report is called Monthly Financial Analysis and pulls data from a SQL stored procedure I wrote called usp_GetEndOfMonthSalesMaster. The stored procedure accepts a Start Date and an End Date; if they are not supplied, it automatically selects the previous month’s data.
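The default-to-previous-month behavior can be sketched roughly like this (the table and column names here are assumptions for illustration, not the actual schema):

```sql
-- Sketch: default the parameters to the previous calendar month
-- when the caller does not supply them.
CREATE PROCEDURE usp_GetEndOfMonthSalesMaster
    @StartDate date = NULL,
    @EndDate   date = NULL
AS
BEGIN
    -- First day of the previous month
    IF @StartDate IS NULL
        SET @StartDate = DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 1, 0);
    -- First day of the current month (exclusive upper bound)
    IF @EndDate IS NULL
        SET @EndDate = DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);

    SELECT *
    FROM SalesMaster              -- table name assumed
    WHERE SaleDate >= @StartDate  -- column name assumed
      AND SaleDate <  @EndDate;
END
```

Using a half-open date range (>= start, < end) avoids missing rows with time components on the last day of the month.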

In the Excel spreadsheet, I created one query that calls the stored procedure. To supply the parameters, I created a second sheet that contains only the parameters, and I defined a named Table that encompasses them.

Parameters Table

Power Query uses the Power Query M formula language, and the query is written in the Advanced Editor.

In my query, I used the Advanced Editor. I started by selecting the Parameters table as the Source and then got to the data via the Table.TransformColumns command. Once I had the extracted data, I set the variables dtStartDate and dtEndDate via the Date.ToText function, as illustrated below, so I could include them in the SQL query.

PowerQuery M formula language query

Calling the stored procedure takes place in the Sql.Database connection command. After specifying the server and database names, you can provide the query text in the options record. This is where you can use the date variables.
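Putting it together, the query looks roughly like this. This is a sketch, not my exact query: the named table (Parameters), its column names (StartDate, EndDate), and the server and database names are all placeholders.

```
let
    // Read the parameter table from the Parameters sheet (names assumed)
    Source = Excel.CurrentWorkbook(){[Name="Parameters"]}[Content],
    // Convert the cell values to dates, then to text for the SQL string
    dtStartDate = Date.ToText(Date.From(Source{0}[StartDate]), "yyyy-MM-dd"),
    dtEndDate = Date.ToText(Date.From(Source{0}[EndDate]), "yyyy-MM-dd"),
    // Call the stored procedure through the connection's Query option
    Result = Sql.Database("MyServer", "MyDatabase", [Query =
        "EXEC usp_GetEndOfMonthSalesMaster @StartDate = '" & dtStartDate
        & "', @EndDate = '" & dtEndDate & "'"])
in
    Result
```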

This took me a while to figure out. I didn’t even know there was a PowerQuery M formula language to use so I had to try to understand how that worked. If anyone has a better way to do this, please provide comments so we can all learn from your expertise.

Dealing with corrupt WebConfigModification entries.

By , June 30, 2011 3:38 pm

Recently, I came across a problem.  I had created a new SharePoint Designer Custom Action and I created a feature solution package to deploy it to the production farm.  In testing it in a dev environment, I kept getting errors from the code I was using to make the appropriate web.config entries.  A custom SPD workflow action requires an ‘authorizedType’ entry in the web.config.  I have code in my feature receiver that will create a WebConfigModification entry in the WebConfigModifications collection and then use the SPService to apply the modifications.  (MSDN).  My problem was that someone had created a corrupt entry in the collection and it kept erroring out before executing my change.  It took a little understanding of how this works to find the solution to the problem.

The WebConfigModifications collection is a persisted object, which means the entries remain until they are acted on.  In essence, it is a persisted queue.  Each entry is related to an SPWebApplication and added to the queue.  To remove the offending entry, I had to identify which web application the entry was registered with, so I just started going through every web application in my farm.  You could iterate through each application, but in my case, I just hard-coded the URL for each one.  Here is the code that I used to clear the corrupt entries:

// Clear all persisted web.config modifications for this web application
string siteUrl = @"http://localhost:44689";
SPSite site = new SPSite(siteUrl);
SPWebApplication webApp = site.WebApplication;
webApp.WebConfigModifications.Clear();
webApp.Update();
// Push the (now empty) modification list out to the farm
SPWebService.ContentService.ApplyWebConfigModifications();

I created a console app and put the code above in the Main() method. I kept executing the code with different web application URLs until it ran without any error. Once it did, I was pretty sure that I had cleared the bogus entry. The next time I ran my feature code, it executed flawlessly!

A couple of notes. Obviously, this code is meant to be executed on the server and not a client. My dev machine has a single WFE. I would think that the best place to execute this would be on the same machine where Central Admin resides. Also – this was for a problem on a MOSS (WSS3) install. I am sure that it works the same in SharePoint 2010.

You can find the correct URLs in IIS for each web application on your server. In my case, it was a shared development machine, so there were many web applications executing under many different ports.
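If you would rather not hard-code URLs at all, the iterate-through-every-web-application idea mentioned above could be sketched like this (untested; the same caveat applies about running it on the server itself):

```csharp
// Sketch: walk every content web application in the farm and clear
// the persisted WebConfigModifications queue for each one.
using Microsoft.SharePoint.Administration;

class ClearWebConfigMods
{
    static void Main()
    {
        SPWebService service = SPWebService.ContentService;
        foreach (SPWebApplication webApp in service.WebApplications)
        {
            webApp.WebConfigModifications.Clear();
            webApp.Update();
        }
        // Push the now-empty modification lists out to the farm
        service.ApplyWebConfigModifications();
    }
}
```

Note that this wipes every pending modification, not just the corrupt one, so any legitimate entries would need to be re-applied by their features afterward.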

It Happens To Everyone

By , February 22, 2011 4:32 pm

Unexpected Error at


 A co-worker pointed this out to me.  Seems like someone was making a change to without testing and got the Unexpected Error!  I’m sure it was unexpected! 

The site was down for only a little while but it gave both of us a chuckle. 

Thanks Matt!

A(nother) Business Case for Twitter

By , September 2, 2010 11:29 am

I can’t see why some companies continue to block Twitter access for their employees.  Especially developers.  For me, Twitter has been an invaluable resource.  A wealth of 140-character wisdom.  A constant supply of what is new in the SharePoint (or insert your favorite topic here) world.

This example just happened to me yesterday.  I am working for a client that wants me to write a SharePoint 2010 workflow using K2’s Blackpearl.  Believe me, I am excited about using Blackpearl, but it is a learning curve and I want to be productive.  Normally, I would just fire up Visual Studio and start writing a workflow.  K2 promises that Blackpearl can build complex workflows without code, and I wanted to see if I could figure it out on my own.  I read through the “My first workflow” instructions and thought I understood what I needed to do.  I started working through my own requirements when I got stumped.  I knew what I wanted to do, I just couldn’t figure out how to tell Blackpearl to do it!

Finally, I sent out a Tweet asking this simple question:

I received a reply asking for my contact information, and later last night I received a detailed email from Holly at K2 telling me how I could accomplish this.  I had searched Google and Bing without finding any hint of how to do it.  It didn’t help that I wouldn’t have understood what I was reading anyway.  Holly’s instructions were clear and easy to understand.  We’ve exchanged a couple of emails and she has suggested ways I could fulfill my whole requirement.  I am going to try this, but I was so pleased that I wanted to get this post out so that other companies would think twice before blocking tools like Twitter for their developers.

Removing OPTIONS with jQuery

By , May 19, 2010 3:41 pm

Here’s a usability solution that was necessitated by my desire to present a logical, workable solution to my end-user.  I’m still learning jQuery so my solution may not be the most elegant – but it works. I welcome any improvements!

The Setup

I have a SharePoint site that is displaying the results of a SQL stored procedure.  I’m using Quest Web Parts for SharePoint – the qSIListView part, which is part of their fine System Integration web parts package.  The great thing about these web parts is that I can create a stored procedure on a database and then use the Filter capabilities of the qSIListView to supply the stored procedure with the parameter values.  It’s a pretty slick solution and very easy to configure.  My problem is that when you use the filter fields as the parameter input fields, only the “Equals” operator has any meaning.  I can select “Begins with” or “Contains”, but it will only pass the parameter value and not the operator.  That means that my end user will be presented with a lot of false options.  To avoid the confusion, I would like to present the user with one operator – “Equals”, or in the case where I’m trying to do some wildcard matching in SQL, I would remove all but “Contains”.  You get my point.  The problem is that the web parts do not have the ability to filter out the operator selections.  I’ve submitted this change idea to Quest for future versions, but in the meantime, I needed to find a solution.  My solution was to use jQuery.



There are many posts out on the web about using jQuery to manipulate the SharePoint interface.  One of the best sites for jQuery on SharePoint can be found on  The solution I am talking about today can be used in any web environment, including SharePoint.  Essentially, my short script removes all <option> elements from all select boxes that match a certain text value.  This effectively narrows the choices presented to the User to only the valid choices, Exclude from search and Equals.

To do this, I placed a Content Editor Web Part (CEWP) onto my page and pasted my script into it.  In the script, I use jQuery to select all <OPTION> elements on the page and remove them if they contain any of the invalid choices.  The script is pretty straightforward.  I have jQuery stored in a script library on my SharePoint site.  This library provides read access to all users on the site.  It is also a common place to store all of my scripts.  A good post on creating a script library can be found here.

Once the document loads, I search for all matching <OPTION> elements and remove them.

<script src="" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function () {
    // Remove the operator choices that have no meaning for these filters
    $("option").remove(":contains('Does not equal')");
    $("option").remove(":contains('Starts with')");
    $("option").remove(":contains('Ends with')");
    $("option").remove(":contains('Is empty')");
    $("option").remove(":contains('Is not empty')");
});
</script>

That’s it!  The end result looks like this.


As I said earlier, this is a pretty simple solution to a problem that I was having.  Now, every time I use filter fields to supply parameter values, I can be assured that I have presented my User with valid options.

I welcome any feedback!

XSL:split-string function

By , February 24, 2010 2:59 pm

One of the reasons I like developing for SharePoint is that I get to work with many different technologies and platforms.  When I am designing custom list forms (display, edit, or new), there are times when I have a delimited string in a field that I want to display differently on the screen.  Since the forms are all XSL style sheets, it is helpful to have a few XSL templates to process the data.

I wrote this XSL template so I could wrap some HTML around any element in a delimited list.  This is specifically geared towards XSL 1.0.  In XSL 2.0, I would most likely use tokenize.  I’m not an expert at XSL so if anyone has any suggestions that would improve this template, please leave comments.

<xsl:template name="split-string">
  <xsl:param name="list" />
  <xsl:param name="delimiter" />
  <xsl:param name="id" />
  <xsl:if test="normalize-space($list)">
    <!-- Make sure the list ends with a delimiter so substring-before always matches -->
    <xsl:variable name="newlist">
      <xsl:choose>
        <xsl:when test="contains($list, $delimiter)">
          <xsl:value-of select="normalize-space($list)" />
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="concat(normalize-space($list), $delimiter)" />
        </xsl:otherwise>
      </xsl:choose>
    </xsl:variable>
    <xsl:variable name="first" select="substring-before($newlist, $delimiter)" />
    <xsl:variable name="remaining" select="substring-after($newlist, $delimiter)" />
    <!-- This is where you need to put the display code -->
    <a>
      <xsl:attribute name="href">./Attachments/<xsl:value-of select="$id" />/<xsl:value-of select="$first" /></xsl:attribute>
      <xsl:attribute name="target">_blank</xsl:attribute>
      <img src="~/_layouts/images/doclink.gif" width="16" height="16" alt="" border="0" />
      <xsl:value-of select="$first" />
    </a>
    <!-- end display code -->
    <xsl:if test="$remaining">
      <!-- I put a little display code here also -->
      <br />
      <!-- end display code -->
      <xsl:call-template name="split-string">
        <xsl:with-param name="list" select="$remaining" />
        <xsl:with-param name="delimiter" select="$delimiter" />
        <xsl:with-param name="id" select="$id" />
      </xsl:call-template>
    </xsl:if>
  </xsl:if>
</xsl:template>
You can add or remove parameters depending on what you need. In the example code above, I had a semicolon-delimited list of email attachment file names that were attached to the list item. I wanted to create a ‘clickable’ link to each file, which is why I needed the ID field. Normally, you wouldn’t need the ID field.

This is how I called the template in my XSL:

<xsl:call-template name="split-string">
  <xsl:with-param name="list"><xsl:value-of select="@EmailAttachmentNames" /></xsl:with-param>
  <xsl:with-param name="delimiter">;</xsl:with-param>
  <xsl:with-param name="id"><xsl:value-of select="@ID" /></xsl:with-param>
</xsl:call-template>

A SharePoint Design Question

By , February 4, 2010 1:10 pm

Normally, I write solutions to problems.  This time around, I need some suggestions to get ideas on how to design a solution to a problem.

The credit department where I work accumulates/produces documents to support the credit limit they grant our customers.  We have categorized them as:

  • Financial Statements
  • Agency Reports
  • Parent Guarantees
  • Other

We have defined a content type for each of these types of documents.  With each of these documents, one common field is the Company Name field that describes the company that the document refers to.

The Company is unique in its structure.  Each Company can have a Parent company, or it can be the Parent of other companies.  I created a Company list with the fields Name and Parent, where Parent is a lookup field into the Company list itself.  We currently have the list populated with about 4,500 company names with the parent relationships defined.

List: Company

  • Company Name, Text Field (required)
  • Parent, Lookup of Company List (Company Name field) (not required)

In the document list that contains the four content types, each content type has a Company field that is a lookup to the Company List.

When we were discussing how to design this “application”, the user said he would like a view where he could see all documents related to a company, whether those documents applied directly to the company or to one of its subsidiaries.  Another requirement is that the documents for one particular company be easy to find and view.

As for volume, each company (4,500 so far) will have at minimum one of each document type, but will more than likely contain 10-20 documents, with a small growth rate.  There is no document disposition being considered.  From their standpoint, once a document goes into the system, it becomes a historical record for that company.  So we’re looking at about 45,000-90,000 documents initially, and it will grow.

My original thought was that to stay within the 2000 item soft limit per view, I would create a folder in the Document Library for each Company created in the Company list.  I would then have the users save their documents in the folder that corresponded to the company name.  To achieve the view of the data where they could see all documents that applied to the parent and child companies, I would use the relationships defined in the Company list to create a custom view using the DataView Web Part.

So – here are my questions:

  1. Do I need a separate folder for each company?  Am I limiting myself?
  2. If we just use a Company Name field using the lookup list, would we run into performance problems?
  3. If we do use folders, is there a way to automate the creation of the folder when I create a new Company in the Company list?
  4. If we do use folders, is there a way to prevent documents from being created outside of the folders?
  5. Can a document inherit a property from its parent?  (i.e., could a document get its company name by being saved into a specific folder with the right company name?)
  6. (If folders) How do I prevent users from saving documents into the document library outside of a folder?

I’m sure there are more questions that I need to be asking.  It has been so long since I worked in Designer or the Web interface.  I’ve been living in Visual Studio for the last year.  Leave comments / remarks / insults.  Anything is appreciated.

Notes to SharePoint Migration

By , January 15, 2010 3:33 pm

The company I work for purchased the Quest Notes Migrator for SharePoint.  Overall, it has saved me lots of time migrating from Lotus Notes, mainly because I have absolutely zero experience with Notes.  The Quest tool allows me to migrate all of the rows into SharePoint pretty painlessly, but I was having one problem.  As I migrated Lotus databases, I was getting SharePoint lists with 50-100 list items.  Performance lagged when trying to view those items, and it made SPDesigner almost unusable.  I set out to find a way for the tool to create folders based on the Created date.  My final solution utilized Lotus @Functions.

In the spirit of trying to save someone else a lot of time, here is what I did. 

  1. I used Quest Notes Migrator for SharePoint Designer.
  2. If you select a folder type in the mapping, Designer will create a folder with the name of the item.  If the item has a forward or backward slash, it will create nested folders.  So, I wanted a nested folder based on a date.
  3. In the Notes data definition, I created a new formula field.
  4. The formula that I used is:  @Text(@Year(@Created))+"/"+@Text(@Month(@Created))+"/"+@Text(@Day(@Created))

    DateString Formula

  5. I created an alias of FolderDateString
  6. Output type is String.
  7. On the DataMapping, map the FolderDateString field to a Folder type. That will tell Designer to automatically create nested folders based on the string.
  8. That’s it!

Hope this helps someone!

Final data mapping

Exporting List Data

By , November 13, 2009 10:05 am

If you have list data that you want to use in another application, what are some ways to get to that data? That was the question posed by a recent requirement from a client. Fortunately, SharePoint presents us with some creative ways to get to the list data.

  1. Export to Excel. This is a good, and quick, way to export data out of SharePoint so that you can use it in other programs. Once you have it in Excel, you can further export it into different formats. The most common is a comma-delimited text file. I am not an Excel expert, but it is quite easy to Export to Spreadsheet from the Actions menu and then save the Excel file in, say, CSV format. From there, many applications can import that text file. But what happens if one of the SharePoint fields is a large text field that could contain commas and line feeds?
  2. Export to Access. For quite a few data export requirements, I tend to use this approach. Access has better import/export functionality, and I can save the exported list data in many formats. How do you do this? From the Actions menu, select Open with Access. [Note: this isn’t available in Document Libraries]. From there, you can decide whether you want to maintain a local copy or just link to the list. Once you have it open in Access, you can export it in a number of different formats.
  3. If your third-party application can call Web Services, SharePoint exposes a lot of functionality via Web Services.  Web Services are beyond the scope of this article, but you can find plenty of information on MSDN, like Windows SharePoint Services Web Services.
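As a taste of the Web Services option, here is a sketch of calling the WSS Lists service (Lists.asmx, GetListItems) from an external application. The site URL and list name are placeholders, and a real call would also need credentials for the SharePoint server.

```python
import urllib.request

def build_get_list_items_envelope(list_name):
    """Build the SOAP 1.1 envelope for the Lists.asmx GetListItems call."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        '<GetListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">'
        f'<listName>{list_name}</listName>'
        '</GetListItems>'
        '</soap:Body>'
        '</soap:Envelope>'
    )

def get_list_items(site_url, list_name):
    """POST the envelope to the site's Lists web service and return raw XML."""
    envelope = build_get_list_items_envelope(list_name)
    request = urllib.request.Request(
        site_url.rstrip("/") + "/_vti_bin/Lists.asmx",
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction":
                "http://schemas.microsoft.com/sharepoint/soap/GetListItems",
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.read()
```

The response comes back as XML rows that the external application can parse however it likes.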

What are some of the ways you deal with the requirement of external applications using list data?  I would love to hear your comments!

Why does making a Lookup site column an indexed field break my CAML Query?

By , August 28, 2009 9:50 am

I created a custom timer job definition that processes email-enabled list items. Since these list items were created from email, I extract certain email header information from the email and save that information as meta-data.

One of those site columns is called EmailSource. It is a Lookup field using the Title field of the EmailSource list as the value. My timer job queries the main list to find all list items where EmailSource is NULL; I then extract information from the email headers, find the appropriate EmailSource in the EmailSource list, and write it to the EmailSource field in the main list.

It has been working pretty slick for the last month or so. Recently, I thought that I would try to improve the response of the main list views by setting the EmailSource site column as an indexed column. Since most of the views are created filtering on the EmailSource field, I figured that this would be an ideal candidate for indexing. As soon as I set the EmailSource site column as an indexed column, my query for IsNull started returning 0 (zero) items, even when I could see that there were list items that didn’t have an EmailSource value.

My query is:

      <Where>
        <IsNull>
          <FieldRef Name='EmailSource' />
        </IsNull>
      </Where>

Now – normally, when the EmailSource field is NOT set as an indexed column, this query would return all list items where EmailSource didn’t have a value. As soon as I set EmailSource as an indexed column, the query would always return zero items, even though I knew it should return some list items.

As soon as I removed that field from the indexed columns list, the query began to work correctly again.

So – I ask you. Why would setting a field as an indexed column break a query?!!!!
