Friday, May 6, 2011

Should I RESTify my RPC calls over HTTP?

We have HTTP webservices that are RPC-style. They return XML representing the object either retrieved or created. I want to know the advantages (if any) of "restifying" the services.

One thing I see is that we don't need representations for EVERY resource, and we don't need to support all operations (GET, PUT, POST, DELETE) on all resources either. Basically my question is this:

Convince me that I should be using RESTful services instead of RPC over HTTP, and what would those RESTful services be?

From stackoverflow
  • For one, it's all about semantics: a URI is a Uniform Resource Identifier. HTTP provides methods to GET, POST, PUT and DELETE a resource, and HTTP headers specify in which format I want to receive or send the information. This is all readily available through the HTTP protocol.

    So you could reuse the same URL you use for HTML output to get XML or JSON, the way HTTP was meant to be used.

    XML-RPC and SOAP are based on calling methods that are described by an XSD or WSDL file, whilst REST is based on getting/modifying resources. The difference is subtle but apparent: the URL solely describes the resource, and not the action, as is often the case with SOAP and XML-RPC.

    The benefits of REST are that you can utilize HTTP verbs to modify a resource (as opposed to a method call that could be named create/new/add, etc.), meaningful HTTP status codes instead of different kinds of error responses, and the ability to request different formats of the same resource in a standard way.

    You also don't have to accept ALL the verbs on a RESTful resource; for example, if you want a read-only resource, just return a 405 Method Not Allowed status code for any verb that isn't GET.

    Should you redo your RPC calls as REST? No, I don't think so; the benefits don't outweigh the development time. Should you learn REST when setting up a new webservice? Yes, I personally think so; consuming a REST resource will feel a lot more natural, and the service can grow much more rapidly.


    Why I feel REST wins over XML-RPC/SOAP is that when developing websites you already aggregate all the necessary data for the HTML output, and you already write validation code for POST bodies. Why should you change to a different protocol just because the transport markup changes?

    This way, when you design a new website (language agnostic), if you really think of URIs as resources, you basically use your URIs as method calls with the HTTP verb prefixing the method call.

    I.e., a GET on /products/12 with the HTTP header Accept: application/json basically (imaginarily) translates to getProducts(12, MimeType.Json).

    This 'method' then has to do a couple of things:

    1. Check if we support Json as a mimetype. (Validate Request)
    2. Validate Request Data
    3. Aggregate data for product 12.
    4. Format to JSON and return.
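The numbered steps above can be sketched as a tiny dispatcher. This is a hypothetical illustration only (the route pattern, method names, and return strings are all invented for the example; no real framework is being quoted):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Dispatcher {
    private static final Pattern PRODUCT = Pattern.compile("/products/(\\d+)");

    // Map an HTTP verb + URI + Accept header to a server-side "method call".
    public static String dispatch(String verb, String uri, String accept) {
        Matcher m = PRODUCT.matcher(uri);
        if (!m.matches()) {
            return "404 Not Found";
        }
        // Step 1: check that we support the requested mime type
        if (!accept.equals("application/json") && !accept.equals("text/html")) {
            return "406 Not Acceptable";
        }
        // Step 2: validate request data (the id must be numeric; the regex ensured that)
        int id = Integer.parseInt(;
        // Steps 3-4 (aggregate and format) would happen inside the selected method
        switch (verb) {
            case "GET":    return "getProduct(" + id + ", " + accept + ")";
            case "DELETE": return "deleteProduct(" + id + ")";
            default:       return "405 Method Not Allowed"; // read-only otherwise
        }
    }
}
```

The point is that the verb picks the operation and the URI picks the resource; content negotiation rides along in the Accept header.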

    If, for some reason, in the next 4 years YAML becomes the next big craze and one of your consumers wishes to talk to you that way, this mime type is plugged in a lot more easily than with regular webservices.

    Now, product 12 is a resource you most likely also want to accept HTML mime types on, to display said product; but for a URI like /product/12/reviews/14/ you don't need an HTML counterpart. You just want your consumers to be able to post to that URL to update (PUT) or delete (DELETE) their own review.

    Thinking of URIs strictly as resources, not just locations of web pages, and combining these resources with the HTTP verb to form method invocations on the server side, leads to clean (SEO-friendly) URLs and (more importantly?) ease of development.

    I'm sure there are frameworks in any language that will automatically do the mapping of URIs to method invocations for you; I can't recommend one since I usually roll my own.

    ASP.NET MVC also works on the same principle but, in my opinion, doesn't produce RESTful URIs: ASP.NET MVC makes the verb part of the URI by default. That said, it's good to note that ASP.NET MVC by no means forces this (or anything, for that matter) upon you.

    If you're going to choose a framework, at the very least it should:

    1. Bind URI's to methods on the server
    2. Support object-to-JSON/XML (etc.) serialization; it's a pain if you have to write this yourself, although, depending on the language, not necessarily all that difficult.
    3. Expose some sort of type-safe request helpers to help you determine what was requested without parsing the HTTP headers manually.
    Mike Pone : This is good. Can you comment more on why REST is better for new webservices over XML-RPC?
    Martijn Laarman : Sure. I feel like I ranted a bit but I hope it explains my position better.
    Martijn Laarman : I see you're a JSP developer; I have heard good stories about Jersey from people who've used it to implement REST with JSP. It also supports JAXB to bind XML/JSON requests automatically to objects.
  • Try taking the verbs out of your URLs.

    Wahnfrieden : URI construction has nothing to do with REST
  • Query strings shouldn't be used for accessing a resource in a hierarchical, non-query manner. Query strings are usually ignored when caching, which is dangerous and slow.

Detect when UITableViewCell goes off the screen


I'm implementing a rich UITableView with custom-created UITableViewCells. I show these on the screen in one fashion, but once they go off the screen I want to take note of that, since the second time they come on I would like them to be displayed in a different manner. Think auto "mark as read" when going off the screen.

I've been looking for some way to detect when a cell goes off the screen (gets dealloc'ed or dequeued or equivalent), preferably in the UITableViewController subclass so I can make a quick note of the [indexPath row], but the UITableViewCell subclass is equally as good.

I haven't been able to do this in any standard way... counting the times a cell appeared seems out of the question, since I do multiple reloadData calls on the table.

Anyone any ideas? This seems a bit tricky :)

From stackoverflow
  • I think I would try periodically checking the indexPathsForVisibleRows property of the UITableView. From the largest index path, you can deduce that all previous rows have been scrolled past.

  • I think you could use the

    - (NSArray *)visibleCells

    method for your UITableView. This returns an array of all cells that are visible. You can then mark any data that is "not visible" (i.e. not in this array) in the way you want, such that when you scroll back to it, it has been updated.

    Hope that helps

  • Are you sure a cell going offscreen is exactly what you want to catch? If you want to mark items as read, this does not seem like a proper way to do it. For example, I might scroll through the table really fast, and I would be very surprised if you marked all of the stuff as read.

    As for the technical part, simply keep a list of cells that are on screen (cellForRowAtIndexPath should add cells to that list), and in scrollViewDidScroll delegate method check if any of them are no longer visible.

    Another possible idea: I remember there is prepareForReuse method on the cell. Not sure when it is called, though.

How to set an int from a hex literal string

I'm attempting to load a hex literal from an XML settings file. I can parse the XML just fine and get the required string from the file,

but I can't seem to get it to set an int variable's value :/


    int PlayerBaseAddress = System.Convert.ToInt32(ConfigLoader.GetSetting("PlayerBaseAddress"));
    // Throws: "Input string was not in a correct format."

    public static string GetSetting(string Val)
       // This loads from the xml file; pretend it's hardcoded to return the string "0x17EAAF00"

    int PlayerBaseAddress = 0x17EAAF00; // This works.
From stackoverflow
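The usual fix is to pass the base to the conversion routine: in C#, Convert.ToInt32(value, 16) parses hex, and with base 16 it accepts a leading "0x" prefix. The same idea in Java, purely for illustration (Integer.decode understands the 0x prefix directly):

```java
public class HexSetting {
    // Parse a setting string like "0x17EAAF00" into an int.
    public static int parseHex(String value) {
        // Integer.decode handles "0x"/"0X"/"#" prefixes; plain Integer.parseInt
        // would need the prefix stripped and a radix of 16 passed explicitly.
        return Integer.decode(value.trim());
    }
}
```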

JavaScript to Save As with current date

I have a time sheet form named Timesheet Paperless.pdf. I send this form to an employee and tell them to fill in their name and employee number. I then ask them to go to File > Save As and name the form INITIALS_TS.pdf. I now would like to have a button within the form that will open a Save As dialog with the filename INITIALS_TS CurrentDate.pdf under My Documents/Work Files/Time Sheets/. So every new pay period they would click the save button on the form to save the file under the time sheets folder with their initials and the current date the button was pushed. How can I do this in JavaScript?

From stackoverflow
  • Sorry, there's no way to do that with JavaScript.

    The closest solution that comes to mind is open a link to a server-side script that outputs the PDF along with a content-disposition header.

    Content-Disposition: attachment; filename=INITIALS_TS.pdf
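The filename pattern the asker wants (INITIALS_TS CurrentDate.pdf) would be built server-side before setting that header. Here is a small sketch of just the header construction (the class and method names are made up for the example):

```java
public class Download {
    // Build a Content-Disposition value such as:
    //   attachment; filename="JD_TS 2009-06-25.pdf"
    // The quotes matter because the suggested filename contains a space.
    public static String contentDisposition(String initials, String date) {
        return "attachment; filename=\"" + initials + "_TS " + date + ".pdf\"";
    }
}
```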

HTML/JSP Doubt, Print date on webpage...

I made a function which displays the date on the webpage, and I uploaded it to a server.

But when I changed the date on my system, I noticed that the displayed date depends on the client machine.

Is it not possible to get the date from the actual server and embed that into my program?

I hope I have explained my doubt. Please suggest a solution with small sample code; that would be helpful.

Thanks a lot.

From stackoverflow
  • You've written a JavaScript function, which obviously is dependent on the date set on the client.

    AGeek : yes, i have written a javascript function which is dependent on date set in the client,, but i want that to be independent of client.. if client changes his date on the system then that would not reflect it on webpage,, is there any logic to solve this...
  • Depending on your server-side language, just use its date() function, which gives the current date/time based on the server clock. Next, format that info and send it to the client as a fixed value.

    For instance, in php:

    echo date('Y-m-d H:i:s'); // date() requires a format string
    AGeek : i have tried this,, it does not take the time of server, rather it takes the time of client machine.. and i have tested this by putting a code on a server, and accessing it to my computer (with date and time changed). but it reflects the date of the client rather server.. so i need to know some website which offer this date etc..
  • Initialize the function with date from the server

    var d = new Date(<%= new SimpleDateFormat("yyyy, M-1, dd").format(new Date()) %>);

    AGeek : giving error... not working!!
    R. Bemrose : What's the error?
    fc : Have you imported `java.text.SimpleDateFormat` and `java.util.Date`?
  • If you have written it in JavaScript, well... that always executes on the client side. If you are calculating the date through JavaScript, it's too late; that code has already left the server.

    To solve this, you would have to make your js function receive data through parameters, and that data should be calculated on the server side.

    You could do something like:

    <%@ page import="java.util.Date" %><%-- Imports date --%>
    <% Date date = new Date();
       String strdate = date.toString(); // could be formatted using SimpleDateFormat
    %>
    <!-- must be inside a form -->
    <input type="text" value="<%= strdate %>"/>

    Or, more elegantly, get the server date in your Java class and write it to the request:

    // formattedDate is defined above, in the format you like the most; it could be a
    // Date or a String
    request.setAttribute("date", formattedDate);

    And then, in your JSP, using for example JSTL:

    <c:out value="${date}"/>

    Or with a scriptlet:

    <% // this java code is run on the server side.
        String strdate = (String) request.getAttribute("date"); %>
    <%= strdate %><!-- prints strdate to the jsp. Could put it in a table, form, etc. -->

    EDIT: In response to your comment, you should:

    <%-- Imports java packages --%>
    <%@ page import="java.util.Date" %>
    <%@ page import="java.util.Calendar" %>
    <%@ page import="java.util.TimeZone" %>
    <%@ page import="java.text.SimpleDateFormat" %>
    <%-- Java code (TIME_ZONE is an instance of java.util.TimeZone) --%>
    <% Date date = new Date();
       Calendar calendar = Calendar.getInstance(TIME_ZONE);
       calendar.setTime(date); // setTime returns void, so it cannot be chained
       SimpleDateFormat sdf = new SimpleDateFormat("MM/dd/yy");
       sdf.setTimeZone(TIME_ZONE); // format in that zone, not the server default
       String strdate = sdf.format(calendar.getTime()); %>
    <%= strdate %><!-- Does not need to use javascript. All work is done on the server side. -->

    I have no idea what your time zone is, but I'm sure you do. Calendar.getInstance() takes an instance of TimeZone as a parameter. That should do it.

    Take a look:

    Interesting link about JSP

    AGeek : i was not able to run your example accurately, but i can imagine ur program will give me the time of the server on the webpage, if you can write the code complete,, using inside a table will do...
    AGeek : hi sir,, your code worked.. but one problem now i am facing is that the server is based at US, and i am from diff. country so time not matching,, not thought of this before,, so can i code something or get the time from some other website everytime the page loads, and comes up with my country's time..
    Tom : You should take a look at the java.util.Calendar class, particularly at the method setTimeZone()
    AGeek : plz specify some example of TIME_ZONE.. lets say, how to write the timezone for GMT+3.5 in the programm..
    Tom : From TimeZone Class, see link above (TimeZone.html) Hours must be between 0 to 23 and Minutes must be between 00 to 59. For example, "GMT+10" and "GMT+0010" mean ten hours and ten minutes ahead of GMT, respectively.
    AGeek : Thnx a lot sir,, i solved my doubt.. All because of u, i achieved it.. thnx..
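To make the GMT+3.5 case from these comments concrete, here is a standalone sketch (plain Java rather than JSP, purely for illustration): SimpleDateFormat formats in whatever TimeZone it is given, and "GMT+03:30" is the spelling for three and a half hours ahead of GMT:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ServerDate {
    // Format an instant for a fixed-offset zone, e.g. GMT+3.5 hours.
    public static String format(Date instant, String zoneId) {
        SimpleDateFormat sdf = new SimpleDateFormat("MM/dd/yy HH:mm");
        sdf.setTimeZone(TimeZone.getTimeZone(zoneId)); // e.g. "GMT+03:30"
        return sdf.format(instant);
    }
}
```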

Hidden features of msbuild

I have an interest in msbuild this week. I'm cleaning up a lot of extremely complex build scripts. Digging in surprises me with how much it can do - msbuild is sort of a hidden feature of .NET programming in itself.

In the SO convention that questions must have answers, in a few days or a week, I'll mark the most useful or coolest hidden feature(s) as accepted.

   let bestAnswer surprise slick useful = (surprise + slick + 2*useful)

Definition of useful: I'm updating existing msbuild scripts that: package (zip files) websites and utilities, CC.NET integration, launch tests (UT + selenium), build databases. I'm adding (new targets, even more useful): deploy to VMWare virtual servers, chained builds (fast build immediately, queue slow tests). If you refer to an external library (like MSBuild community tasks), it would be nice to know how to get it.

Some msbuild surprises I've already found.

  • Hello world using the Message task and Properties.
  • Using msbuild as an installer for an extremely complex server product. MSBuild Community Tasks managed IIS server setup. The WriteLinesToFile and XmlUpdate tasks wrote server-specific configuration files. If you've worked with MSI, you'll know that anything is better than MSI for installation.
  • For newbies: csproj and vbproj files are the same as msbuild "proj" files. To edit one directly: unload your csproj or vbproj, then right-click the project and select Edit. This is nicer and more powerful than working with clunky pre-build / post-build events.
  • MSBuild comes with the generic .NET installation. Unlike other fancy tools, you can use it on a totally clean server / desktop.

Here is an msbuild Hello World. After I wrote it, I found the MSDN hello world.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build;Test" xmlns="">
  <Target Name="Hello">
    <Message Text="Hello, $(Who)" Importance="high" />
  </Target>
  <Target Name="Build" DependsOnTargets="Hello"/>
  <Target Name="Test"/>
</Project>
From stackoverflow
  • MSBuild has a number of nice features. I like

    recursive file specs

    <Files Include="$(src)\**\*.cs" Exclude="$(src)\**\*test.cs" />

    Batching and Item Metadata

    <ItemGroup>
      <!-- Version metadata values are illustrative -->
      <F Include="SampleApplication.t">
        <Version>1</Version>
      </F>
      <F Include="SampleApplication2.t">
        <Version>2</Version>
      </F>
      <F Include="SampleApplication3.t">
        <Version>3</Version>
      </F>
    </ItemGroup>
    <Target Name="Build">
      <Touch Files="%(F.FullPath)" AlwaysCreate="True"
             Condition=" '%(F.Version)' > '1' ">
        <Output TaskParameter="TouchedFiles" ItemName="CreatedFiles"/>
      </Touch>
      <Message Text="Created files = @(CreatedFiles)"/>
      <Message Text="%(F.Identity) %(F.Version)"/>
    </Target>

    Target level dependency analysis

    <Target Name="Build"
               Outputs="@(MyItems -> '%(Filename).dll')">
    Marc Gravell : The recursive ** is a trick I use for sharing code between frameworks (CF/Silverlight/.NET/etc) - very useful.
  • You can reference one msbuild file from within another. All of our targets, such as those for running NCover, SourceMonitor, Duplo, etc., live in a common targets file. For each project, we create an msbuild file with PropertyGroup and ItemGroup sections, followed by an Import of the common targets. This guarantees that all of our builds run the same set of analysis tasks, and it saves us time writing the scripts.
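A minimal sketch of that layout (file and property names are invented for the example; Common.targets would hold the shared NCover/SourceMonitor/Duplo targets):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Analyze" xmlns="">
  <PropertyGroup>
    <ProjectName>MyProject</ProjectName>
  </PropertyGroup>
  <ItemGroup>
    <SourceFiles Include="src\**\*.cs" />
  </ItemGroup>
  <!-- All shared analysis targets live in one place -->
  <Import Project="Common.targets" />
</Project>
```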

    • I have found the MSBuild Extension Pack to be incredibly useful. The documentation is very well organized, making it easy to find the info you need.

    • They have a section on configuring IntelliSense for build files, which can be found here

    • Attrice has an incredible tool that I use often when I need to work on build scripts. What makes it worth your while to try is that it has a debugger that shows you the dependent tasks as it executes your build script, with autos and watch variables while the build is running: Microsoft Build Sidekick v2.3

    • Setting SVN to quiet mode felt to me like it sped up the build process a lot. Adding the following to your MSBuild.Community.Tasks.Subversion.SvnExport will run the build without logging each and every file that it gets out of SVN:

      Arguments="--force -q"

  • Use the /M command-line parameter to enable usage of all available CPU cores.

  • This is not really a hidden feature, but I think that batching is very powerful when understood.

    For more information you can read my related blog entries at:

Question on modifying this custom iterator

I have the below code, which iterates over a list based on a custom algorithm:

        public static IEnumerable<TItem>
        MakeCustomIterator<TCollection, TCursor, TItem>(
            this TCollection collection,  // extension method of the collection used (eg List<T>)
            TCursor cursor,               // an array of ints which holds our progress
            Func<TCollection, TCursor, TItem> getCurrent,
            Func<TCursor, bool> isFinished,
            Func<TCursor, TCursor> advanceCursor)
        {
            while (!isFinished(cursor))   // while we haven't reached the end of the iteration...
            {
                yield return getCurrent(collection, cursor);
                cursor = advanceCursor(cursor);
            }
        }



     var matrix = new List<List<double>> {
         new List<double> { 1.0, 1.1, 1.2 },
         new List<double> { 2.0, 2.1, 2.2 },
         new List<double> { 3.0, 3.1, 3.2 }
     };

     var iter = matrix.MakeCustomIterator(
         new int[] { 0, 0 },
         (coll, cur) => coll[cur[0]][cur[1]],
         (cur) => cur[0] > 2 || cur[1] > 2,
         (cur) => new int[] { cur[0] + 1,
                              cur[1] + 1 });

     foreach (var item in iter)
         Console.WriteLine(item);


When I use this code, it will get 1.0 and then 2.1 (diagonally below, in the next list). Is it possible to go from 1.0 to 1.1 in the first list? Or to go vertically down from 1.0 to 2.0?

Note: This code snippet is from Accelerated C# 2008.


From stackoverflow
  • You just need to change the idea of what "advancing" means (potentially along with what "finished" means). For example, to just go "down" you'd use:

    var iter = matrix.MakeCustomIterator(
         new int[] { 0, 0 },
         (coll, cur) => coll[cur[0]][cur[1]],
         (cur) => cur[0] > 2 || cur[1] > 2,
         // Increase the row, but stay on the same column
         (cur) => new int[] { cur[0] + 1, cur[1] });

    To go "along":

    var iter = matrix.MakeCustomIterator(
         new int[] { 0, 0 },
         (coll, cur) => coll[cur[0]][cur[1]],
         (cur) => cur[0] > 2 || cur[1] > 2,
         // Stay on the same row, but increase the column
         (cur) => new int[] { cur[0], cur[1] + 1 });

    It would be trickier to go "along then down" but feasible:

    var iter = matrix.MakeCustomIterator(
         new int[] { 0, 0 },
         (coll, cur) => coll[cur[0]][cur[1]],
         (cur) => cur[0] > 2,
         // Stay on the same row if we can, otherwise move to next row
         (cur) => cur[1] == 2 ? new int[] { cur[0] + 1, 0 }
                              : new int[] { cur[0], cur[1] + 1});
    dotnetdev : Thanks for that!

How do you add an icon overlay to folders icons in Source Control Explorer?

I am interested in marking folders as Active, Archived, and Released in Source Control Explorer so it is easier for the team to see which branches are active. I am somewhat familiar with VSX so once I know where to start I can make progress, but I need help with where to start.

An example of what I want to do is VisualSVN which places a status of the files in Solution Explorer. I am unsure if TFS will allow me to set a property on the folder so that if the folder is moved that status will move with it. If I have to I could create a file called BranchStatus.xml and read that file to set the icon.

If you know which namespaces and VSX/DTE objects I should look into on MSDN please let me know. I appreciate the help.

From stackoverflow
  • The simple answer is no. The TFS team confirmed there is no extensibility for the Source Control Explorer in VS2008. There may be such an option in VS2010.

  • In response to your comment:

    Just as an idea: using the command line, you can set up and modify workspaces for anyone (if you have sufficient privileges). I think it is also possible to cloak a branch for other workspaces (i.e., your co-workers).

    The stuff could still be accessed if needed, but would not be automatically retrieved.

    Not as good as what you were going for, but it could work...


  • What we have decided to do is move the branch into a folder named Archive once we release the next version. This way everyone knows it's just there for historical reference.

    Brennan : This is what we chose to do on our team as well. In VS 2010 there may be options to set a status on the folder/branch but there will not be any extension points like exist in the Solution Explorer. :(

SVN Server with Apache can let me access my files anywhere! Firefox seems otherwise

I just installed Visual SVN Server and I just found that I can access my files using plain browsers like IE or Chrome.

I can even stream an mp3 file from my home to my work that way... or access any files at home if I forward the port to the Visual SVN server...

although I noted that IE 8 will show a red background for the current URL... so the link is not easily readable... Also, Firefox simply refuses to open the page, reporting that the certificate is self-signed. Does anyone know how to make Firefox 3 able to open it?

From stackoverflow
  • When you get the screen that says "Secure Connection Failed", follow these steps:

    1. There should be a link that says "Or you can add an exception...". Click on that link.
    2. Then click the button that says "Add Exception..."
    3. Click the "Get Certificate" button
    4. Make sure the "Permanently store this exception" checkbox is checked
    5. Click the "Confirm Security Exception" button.

    At this point you should be prompted for your credentials, and everything should work fine.

  • At the bottom the page talking about the certificate, there should be a link that says "Add an exception." Follow that link, click on "get / view certificate" (sorry, going from memory), then "add exception."

    Essentially, it's just complaining about the SSL certificate's origin. You can get rid of the red background in IE8 by adding the certificate there as well.

  • If you're using Windows Authentication to your repositories, you must use https and not http to access those with an svn client. For security reasons, neon (the default DAV library subversion clients use) has SSPI (the windows authentication) disabled for non-encrypted connections.

    Bob The Janitor : Visual SVN uses https; the problem he is having is that the secure server certificate isn't signed

cols, colgroups and the css :hover pseudo-class

I'm trying to create a table to display an individual's BMI.

As part of this, I'd like, on :hover, for the <tr> and <col> (or <colgroup>) to be highlighted also, in order for the intersection to be more apparent.

As the table will feature both metric and imperial measurements, the :hover doesn't have to stop at the cell (from any direction) and would, in fact, be a bonus if it extended from one axis to the other. I'm also using the XHTML 1.1 Strict doctype, if this makes a difference. An example (the real table's...larger), but this should be representative:


tr:hover {background-color: #ffa; }

col:hover {background-color: #ffa; }



    <col class="weight" /><colgroup span="3"><col class="bmi" /></colgroup>






Am I asking the impossible, or do I need to go jQuery-wards?

From stackoverflow
  • AFAIK, CSS :hover on TRs isn't supported in IE anyway, so at best the TR part of that will only work in Firefox.

    Never even seen a :hover work on a col/colgroup so not sure if that's possible...

    Think you might be stuck with a Javascript implementation.

    There's an example here that works (rows & cols) in Firefox, but again it's broken in IE... cols don't work.

    David Thomas : That's definitely consistent with my experiences, if not quite the miracle I was hoping for... =)
    Kezzer : Depends on the doctype and the version; it works in IE7 anyway.
    Nick Presta : The :hover state will not only work in Firefox, but every other major non-IE browser (Opera, Konqueror, Safari, et al).
  • There is a very decent jQuery plugin I've come across, located here, which does a very good job of this kind of thing, with loads of examples. Personally, I'd use that.

    David Thomas : Wouldn't it be nice, though, to be able to use CSS as it should -in my imagination be able to- be used? =) I'll check out the JQuery, thanks for that!
    Chad Scira : thats hover madness! +1

xp_sendmail: Procedure expects parameter @user

I am getting xp_sendmail: Procedure expects parameter @user when I run xp_sendmail.

How do I associate a given SQL Server service login profile (SQLMail_Whatever) with Outlook?

From stackoverflow
  • Here's how I got it working with SQLServer2005 (need to use sqlmail for backward compatibility):

    1. Log in to the machine running the SQL server with the login profile you intend to use for mailing
    2. Configure an Outlook account for that specific account
    3. Set up the SQLServer and SQLServer Agent services to start up using the same account
    4. From SQL Server Management-->Legacy-->SQLMail, set up and test outlook as the profile name
    5. From SQL Server Agent-->Properties-->Alert System, check "enable mail profile" and pick SQLMail and outlook in the dropdowns.

    After a service restart xp_sendmail will work.

Should an object-searcher method be in the parent object, or in the object being searched?

Which constitutes better object oriented design?

Class User { 
   id {get;set} 
}
Class Office { 
   id {get;set}  
   List<User> Managers() { } // search for users, return a list of them
}

or this one

Class User { 
   id {get;set} 
   List<User> Managers() { } // search for users, return a list of them
}
Class Office { 
   id {get;set}  
}
From stackoverflow
  • I personally like the first one. User is an entity, not a collection; Office is the one that contains Managers.

    I probably also would create a UserList class.

    public class UserList : List<User> { }

    class User {
      public int id {get; set;} 
      public bool IsManager { get; set;}
    }
    class Office {
        private UserList _users;
        UserList Managers {
            // note: FindAll returns a new List<User>, so copy into a UserList
            // rather than casting (a direct cast would fail at runtime)
            get {
                var managers = new UserList();
                managers.AddRange(_users.FindAll(x => x.IsManager));
                return managers;
            }
        }
    }
  • The first solution is the better one, because User does not/should not know how Office works and how to obtain a list of managers.

  • User john;
    List<User> userManagers = john.Managers();     // get managers of this user
    Office london;
    List<User> officeManagers = london.Managers(); // get managers of this office

    Unless it's a static method, make it a method of a class of which you have an instance: there is no point in making getUsers a non-static method of the User class, because then you'd need a User instance in order to invoke the getUsers method.

  • Similar to the other answers, I prefer the first solution. After all, what relationship does one user have to the collection being searched? How would the client get hold of a user to search with in the first place?

How to fix redirect OGNL error, Unable to set param?

I am not sure if this is me or if this is a bug.

I got the following error

11:52:01,623 ERROR ObjectFactory:27 - Unable to set parameter [dest] in result of type [org.apache.struts2.dispatcher.ServletRedirectResult]
Caught OgnlException while setting property 'dest' on type 'org.apache.struts2.dispatcher.ServletRedirectResult'. - Class: ognl.ObjectPropertyAccessor
Method: setProperty
Line: 132 - ognl/
        at com.opensymphony.xwork2.ognl.OgnlUtil.internalSetProperty(

And my config is pretty minimal

<package name="esupport" namespace="/esupport" extends="struts-default">
        <action name="old-esupport" class="">
            <result type="redirect">
                <param name="location"></param>
                <param name="dest">${dest}</param>
            </result>
        </action>
</package>

And my class has a pair of get/set methods for dest. And that's it. Nothing fancy.

I have found this thread in the forum. But it doesn't solve my problem

I am using:

Struts 2.1.16, Spring 2, Spring Security + CAS

(The funny behavior is it sends me to the CAS server after the error, but I guess it will be corrected after the redirect issue got fixed)

From stackoverflow
  • It seems like a bug with Struts2. They recommend... hiding the error by:

    <category name="com.opensymphony.xwork2.ObjectFactory">
       <priority value="fatal"/>
    </category>

    from the Troubleshooting guide's section on redirects

  • For me, "dest" comes out as null. There is a getter and setter for it in both the current action and the redirect action. What is the issue?

LDAP error in Tomcat - TLS confidentiality required

I'm trying to configure a Realm in Tomcat to access an LDAP server with TLS security. My basic Realm configuration looks like this:

    <Realm className="org.apache.catalina.realm.JNDIRealm" 
        userPattern="uid={0},ou=People,dc=nsdl,dc=org" />

I get an error like this:

SEVERE: Catalina.start: 
LifecycleException:  Exception opening directory server connection:  
    javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - TLS confidentiality required]
    at org.apache.catalina.realm.JNDIRealm.start(
    at org.apache.catalina.core.ContainerBase.start(
    at org.apache.catalina.core.StandardHost.start(
    at org.apache.catalina.core.ContainerBase.start(
    at org.apache.catalina.core.StandardEngine.start(
    at org.apache.catalina.core.StandardService.start(
    at org.apache.catalina.core.StandardServer.start(
    at org.apache.catalina.startup.Catalina.start(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.catalina.startup.Bootstrap.start(
    at org.apache.catalina.startup.Bootstrap.main(

I have tried a wide variety of fixes, without changing the problem or the error message. This includes:

  • changing the protocol in the connectionURL to "ldaps"
  • changing the port in the connectionURL to 636
  • adding protocol="TLS" to the realm
  • moving the Realm declaration from conf/server.xml (under Host or Engine) to META-INF/context.xml in the webapp
  • adding ldap.jar to server/lib
  • changing from Tomcat 5.5 to Tomcat 6.0

Each of these produces the same error message (although the stack trace is different in some configurations).

Any ideas?

From stackoverflow
  • The answer is actually not related to the question as posted here. The problem was related to how the Realm was specified.

    The Realm was specified in a Context element contained in a file located inside the webapp at META-INF/context.xml

    What I didn't realize was that

    • Tomcat copies this file to conf/Catalina/localhost/{webapp}.xml,
    • If a newer version of the WAR file is given to Tomcat, it will not replace {webapp}.xml with the newer version

    So the reason that the same error message happened every time was because my initial copy of the Realm was bad, and my attempted fixes were not being considered.

    In fact, the Realm specification is correct as shown above. Once I cleared out the stubborn file, it worked fine.
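    Clearing the stubborn file can be scripted before each redeploy (a sketch; CATALINA_HOME and the webapp name below are illustrative):

    ```shell
    # Tomcat copies META-INF/context.xml to conf/Catalina/localhost/<webapp>.xml
    # and will not overwrite that copy when a newer WAR arrives, so remove the
    # stale file and let Tomcat regenerate it from the new WAR on next deploy.
    CATALINA_HOME=/usr/local/tomcat
    WEBAPP=mywebapp
    rm -f "$CATALINA_HOME/conf/Catalina/localhost/$WEBAPP.xml"
    ```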

controller parameter NULL when using JQuery POST and ASP.NET MVC

I have no problems processing jQuery GET requests in a controller; however, I cannot get any form data to POST. The following client snippet:

//process result

combined with a controller:

public ActionResult Save(string name)
{
    return Json("Success!");
}

will result in a NULL value for the name parameter when inspected inside the action method, whereas I expected name to be mapped to the method parameter. All the other request objects (Request.Form etc.) in this context also seem to be NULL. I can do this with a $.get, but I think I am supposed to do any operations with side-effects via POST. I am using ASP.NET MVC 1.0, jQuery 1.2.6 and IE7.


Update: see my answer below and humble apologies

From stackoverflow
  • Could it be that the Save(string name) method is expecting stringified JSON? Try this:

    "{'name':'John'}", function(result){
    martijn_himself : unfortunately this doesn't work - thanks though!
  • It isn't as simple as making a JSON object and throwing it at an action.

    Start from here. People have written small scripts that get the JSON object dressed and ready for an action to read it in and map it to properties or arguments.

    martijn_himself : I'll read this; it seems though that the link addresses how to pass JSON to action methods that expect complex custom types as parameters - which I'd like to do next! For now I'm simply trying to pass a string, which it seems should be straightforward - Thanks
    Mark Dickinson : Yeah, that's true. What is nice about passing JSON objects in is that you can write cool action filters to deserialize, and validate them. It's very good for AJAX with MVC. Hope it helps, thanks for your comment.
  • I believe your code should work. Is your URL correct, and are your routes set up correctly? In addition, you could always fire up Fiddler to see exactly what your request to the server is and whether you are passing the correct items.

    martijn_himself : Thanks! I'll try Fiddler. I've used Web Development Helper in IE and it does seem to pass the right request body to the method.
  • Sorry guys, I had a $.ajaxSetup entry in the page which overrode the default contentType to application/json.

    When using the default contentType as follows:

    $.ajax({ url: url,
             type: "POST",
             contentType: "application/x-www-form-urlencoded",
             success: function(result) { alert(result); },
             data: { name: "John" }
           });

    It works because processData is true by default, which means the data object will be serialized into a URL-encoded string (data: "name=John" also works).

    Sorry for wasting your time :) and thanks to Mark for the suggestion on passing JSON objects; I'll do that next because it seems very cool.

    Chuck Conway : Thanks for posting your fix.

TFS: How can I automatically close matching work items on successful build?

We are using continuous integration as part of our build automation. For every check-in, the TFS build server builds the project and, on success, deploys to our web servers.

When the build fails, it automatically creates a new Bug with the details of the build failure.

Due to CI and the activity on the server, this might result in 10 or 20 failure work items before the build starts succeeding again.

So, I have two options. I'd like to either have the build process see if an open work item already exists for a build failure and just add details to that; OR, I'd like the build server to close all of the build failure items automatically when it starts working again.

Any ideas?

From stackoverflow
  • You can create an MSBuild Task to do either of these options. Here is a similar piece of code I use to get you started, but since I don't know the details of your work item or process, you will have to change it.

    This code takes all of the work items associated with a build and updates their status.

    If you select your first option you can just change the UpdateWorkItemStatus method and update any existing WIs. For the second option you will need to do a bit more work, as you need to look up the prior build rather than take it as an input.

    using System;
    using System.Collections.Generic;
    using System.Text;
    using Microsoft.Build.Utilities;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;
    using Microsoft.Build.Framework;
    using Microsoft.TeamFoundation.Build.Client;

    namespace Nowcom.TeamBuild.Tasks
    {
        public class UpdateWorkItemState : Task
        {
            private IBuildDetail _Build;

            public string BuildUri { get; set; }
            public string TeamFoundationServerUrl { get; set; }

            private void test()
            {
                TeamFoundationServerUrl = "Teamserver";
                BuildUri = "vstfs:///Build/Build/1741";
            }

            public override bool Execute()
            {
                bool result = true;
                try
                {
                    TeamFoundationServer tfs = TeamFoundationServerFactory.GetServer(TeamFoundationServerUrl, new UICredentialsProvider());
                    WorkItemStore store = (WorkItemStore)tfs.GetService(typeof(WorkItemStore));
                    IBuildServer buildServer = (IBuildServer)tfs.GetService(typeof(IBuildServer));
                    _Build = buildServer.GetAllBuildDetails(new Uri(BuildUri));

                    // add build step
                    IBuildStep buildStep = InformationNodeConverters.AddBuildStep(_Build, "UpdateWorkItemStatus", "Updating Work Item Status");
                    try
                    {
                        Log.LogMessageFromText(string.Format("Build Number: {0}", _Build.BuildNumber), MessageImportance.Normal);
                        List<IWorkItemSummary> assocWorkItems = InformationNodeConverters.GetAssociatedWorkItems(_Build);

                        // update work item status
                        UpdateWorkItemStatus(store, assocWorkItems, "Open", "Resolved");
                        SaveWorkItems(store, assocWorkItems);
                    }
                    catch (Exception)
                    {
                        UpdateBuildStep(buildStep, false);
                        throw;
                    }
                    UpdateBuildStep(buildStep, result);
                }
                catch (Exception e)
                {
                    result = false;
                    BuildErrorEventArgs eventArgs;
                    eventArgs = new BuildErrorEventArgs("", "", BuildEngine.ProjectFileOfTaskNode, BuildEngine.LineNumberOfTaskNode, BuildEngine.ColumnNumberOfTaskNode, 0, 0, string.Format("UpdateWorkItemState failed: {0}", e.Message), "", "");
                    BuildEngine.LogErrorEvent(eventArgs);
                }
                return result;
            }

            private static void SaveWorkItems(WorkItemStore store, List<IWorkItemSummary> assocWorkItems)
            {
                foreach (IWorkItemSummary w in assocWorkItems)
                {
                    WorkItem wi = store.GetWorkItem(w.WorkItemId);
                    if (wi.IsDirty)
                        wi.Save();
                }
            }

            // Check in this routine whether the work item is a bug created by your CI
            // process. Check by title, assigned-to or description, depending on your process.
            private void UpdateWorkItemStatus(WorkItemStore store, List<IWorkItemSummary> assocWorkItems, string oldState, string newState)
            {
                foreach (IWorkItemSummary w in assocWorkItems)
                {
                    Log.LogMessageFromText(string.Format("Updating Workitem Id {0}", w.WorkItemId), MessageImportance.Normal);
                    WorkItem wi = store.GetWorkItem(w.WorkItemId);
                    if (wi.Fields.Contains("Microsoft.VSTS.Build.IntegrationBuild") && wi.State != "Resolved")
                        wi.Fields["Microsoft.VSTS.Build.IntegrationBuild"].Value = _Build.BuildNumber;
                    if (wi.State == oldState)
                    {
                        wi.State = newState;
                        foreach (Field field in wi.Fields)
                        {
                            if (!field.IsValid)
                                Log.LogMessageFromText(string.Format("Invalid field: {0}", field.Name), MessageImportance.High);
                        }
                    }
                    if (wi.IsDirty)
                        wi.Save();
                }
            }

            private void UpdateBuildStep(IBuildStep step, bool result)
            {
                step.Status = result ? BuildStepStatus.Succeeded : BuildStepStatus.Failed;
                step.FinishTime = DateTime.Now;
                step.Save();
            }
        }
    }

Handling Deprecated Includes With Autoconf

I have been using ext::hash_map in a C++ project and notice that in g++ 4.3 it is deprecated in favor of unordered_map. I need to maintain compatibility with old systems that do not have unordered_map available. It occurs to me that this is just the sort of thing that could be handled by autoconf, which I am already using. I have been unsuccessful in finding documentation on doing so, however.

I would guess that I need to do something like the following:
- Replace all instances of ext::hash_map in my code with MY_HASH_MAP
- Replace all instances of ext/hash_map in my code with MY_HASH_INCLUDE
- Include a line in my autoconf input using some combination of AC_CHECK_HEADERS and AC_DEFINE

I have not been able to figure out exactly the proper autoconf magic to make this work, and am wondering if it is even the best solution anyway. Surely this is the sort of thing that many other people have dealt with, and there might be some standard solution that I have been unable to find?

So, three related questions:
- Is there a standard way of handling this that I should be using?
- How do I do this in autoconf?
- Should I just use -Wno-deprecated and assume the C++ standard is not going to be updated within my lifetime?

From stackoverflow
  • I would consider leaving the code as is and turning off the 'deprecated' warning instead, especially if you have to support older systems where you only have ext::hash_map available.

    IIRC ext::hash_map is not part of the standard anyway, so the main harm that could be done is that (eventually) the g++ maintainers will remove support for it. However, if you rejig the code to include both hash_map and the TR1 unordered_map, you've suddenly doubled the testing effort for this particular bit of code. Unless there is a specific reason that might require you to duplicate the effort, save it for something more worthwhile.

  • You could use AC_CHECK_HEADERS([my_header_file]) to see which headers are present, then create a new class MyApp::hash_map which, depending on which defines are set, wraps the functionality accordingly.
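    As a sketch of how that could look (the header, macro, and file names here are illustrative assumptions, not a tested recipe): let define a HAVE_ symbol in config.h, then branch on it in one wrapper header that the rest of the code includes.

    ```
    # in AC_CHECK_HEADERS defines HAVE_TR1_UNORDERED_MAP
    # in config.h when the header is found
    AC_CHECK_HEADERS([tr1/unordered_map])
    ```

    ```cpp
    // my_hash_map.h (illustrative wrapper): the single point of choice
    #include "config.h"

    #ifdef HAVE_TR1_UNORDERED_MAP
    # include <tr1/unordered_map>
    # define MY_HASH_MAP std::tr1::unordered_map
    #else
    # include <ext/hash_map>
    # define MY_HASH_MAP __gnu_cxx::hash_map
    #endif
    ```

    The rest of the code then uses MY_HASH_MAP everywhere, and only this one header ever mentions the real implementation.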

Django, jQuery, XMLHttpResponse error

I'm trying to learn some basic Ajax using Django. My simple project is an app that randomly selects a Prize from the available prizes in the database, decrements its quantity, and returns it to the page.

I'm using jQuery's $.ajax method to pull this off. The only thing that's running is the error function defined in my $.ajax call, but the error message says nothing but "error". I'm new to Ajax, so I'm probably overlooking something obvious. Here's the relevant code:


from django.db import models

class Prize(models.Model):
    name = models.CharField(max_length=50)
    quantity = models.IntegerField(default=0)

    def __unicode__(self):
        return


from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^get_prize/$', 'app.views.get_prize'),
)


import random

from django.http import HttpResponse
from django.utils import simplejson

from app.models import Prize

def get_prize(request):
    prizes = Prize.objects.filter(quantity__gt=0)
    to_return = {}

    if prizes:
        rand = random.randrange(len(prizes))
        prize = prizes[rand]
        prize.quantity -= 1
        # remember to save, or the decrement is lost

        to_return['prize'] =
        data = simplejson.dumps(to_return)

        return HttpResponse(data, mimetype='application/json')
    else:
        to_return['msg'] = "That's all the prizes.  No more."
        data = simplejson.dumps(to_return)

        return HttpResponse(data, mimetype='application/json')


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html xmlns="">
<title>User's Conference Prize Randomizer</title>
<link rel="stylesheet" type="text/css" href="/static-media/style.css" />
<script type="text/javascript" src="/static-media/jquery-1.2.6.min.js"></script>

<script type="text/javascript">
 var get_prize = function() {
  var args = {
   type: "GET",
   url: "get_prize",
   dataType: "json",
   success: done,
   error: function(response, error_string, e){
    alert( "Error: " + response.responseText + " " + error_string );
    for (var i in e){
     // inspect the error object's properties here
    }
   }
  };
  $.ajax(args);
 };

 var done = function(response) {
  if(response) {
   // display the prize here
  }
  else {
   alert("Something boo-booed");
  }
 };

 $(document).ready(function() {
  $('#start').click(get_prize);
 });
</script>
</head>

        <p><a href="" id='start'>Get Prize</a>, this link isn't working how I'd like.</p>


From stackoverflow
  • This is not a direct answer to your question, but have you looked at the jQuery taconite plugin? It makes most AJAXy stuff trivial to do. More on malsup's taconite page. While you're at it, check out his other excellent plugins.

  • Good grief. I had just left out return false in the function that made the ajax call. Way to go n00b. :0.

  • This isn't exactly related to your question, but if you're trying to fetch a random object from the database, you could do this instead:

    prize = Prize.objects.filter(quantity__gt=0).order_by('?')[:1]
    if prize:
        prize = prize[0]
    # Using a slice prevents an exception if the query returns an empty queryset.

    # If you're not a fan of LBYL, you could wrap the query in a try...except block instead:
    try:
        prize = Prize.objects.filter(quantity__gt=0).order_by('?')[0]
    except IndexError:
        pass  # No more prizes pal...done, fini.
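    The selection step from the question (picking a random index with randrange) can also be pulled into a tiny helper that is testable without Django at all; a sketch, with a made-up helper name:

    ```python
    import random

    def pick_random(items):
        """Return a random element from a sequence, or None if it is empty."""
        if not items:
            return None
        return items[random.randrange(len(items))]
    ```

    The same helper works on a list or on an evaluated queryset, so the view body shrinks to one call plus the JSON response.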

Zend Route Regex Reverse problem with forward slashes

Hi I have a problem with reverse for regex routes, my config file is below:

routes.route1.type = "Zend_Controller_Router_Route_Regex"
routes.route1.route = "([^\,]+)([^p]*)(?:\,page_[0-9]+)?\.html"
routes.route1.defaults.controller = "index"
routes.route1.defaults.action = "find" = "url_path" = "url" = "options" = "page"
routes.route1.reverse = "%s%s,page_%d.html"

the url will be,page_1.html. The number of categories is unspecified; the current regex works fine and gets all the categories at once, but the reverse URL-encodes all forward slashes as %2F. Does anyone know how I can keep the forward slashes? I need the reverse for pagination, and the encoded URLs look just plain ugly.

thx :)

From stackoverflow
  • If you're using the URL helper, set the fourth parameter to false to disable the encoding (which is on by default). So something like:

    <?php echo $this->url(array(
        'url_path' => 'whatever',
        'url' => 'something',
        'options' => 'foo',
        'page' => 'bar'
    ), 'route1', false, false) ?>

Agile Methods Specifically taylored to working solo?

Most Agile methodologies I'm reading about speak volumes about how best to keep communication bottlenecks within a team to a minimum. When working as a solo developer, most of these don't really apply. Stand-up meetings are "interesting" when done alone, for example. :)

My question is, when working solo, from what particular methodology would a solo developer extract the most value?

From stackoverflow

  • Instead of choosing a particular methodology, I would look around at the practices in different methodologies and adopt those that help you be agile. For example, I use stories, though I keep them in a wiki instead of on story cards. The wiki works better for me since my customers aren't co-located, and putting story cards up on a wall for just myself to see doesn't add enough value. I also use TDD, two-week iterations, and monthly release cycles. I try to get the customer as involved in writing my stories as possible, and specifically talk to them about developing small units of functionality that I can deliver early and often.

  • I recommend one-man scrum, 'pomodoro'.

    Marnix van Valen : I tried the pomodoro technique this week and it really helps to keep focus. Great tip!
    CAD bloke : The PDF and other resources are now at
  • At least two possible approaches:

    • Scrum - use a personal task board, whether with a physical whiteboard/corkboard or an electronic version (I'm using and it's very nice).
    • Kanban - I haven't tried it yet myself, but I intend to try it within a few weeks. The main idea here is that you don't manage iterations; you manage the flow of tasks/stories from the backlog to completion, with attention on not juggling too much at once. Kanban is great for allowing you to focus, avoiding too much multi-tasking, and is very light-weight in nature. A physical kanban board is very easy to use (the portable kanban mentioned there sounds like a nice idea for an individual).

    PS If anyone is aware of a good SaaS web-based electronic personal kanban (like is for Scrum) please let me know!

    MichaƂ Drozdowicz : Have a look at
  • Aside from the other great advice - keep a teddy bear at your desk. Every time you're struggling with a problem, explain it to the bear. Usually about half way through the explanation the problem will become apparent.

    tooleb : and if he starts talking back or interrupts too much - TAKE A BREAK!
    projecktzero : Sounds like the rubber duck method of debugging:
    kdmurray : This totally works. If you don't have a teddy bear try explaining it to your non-technical spouse or other family member. :)
  • I've been working on small solo projects for years and recently one of them has grown into a moderately large project. So I've been facing this same challenge.

    Simply by virtue of being solo, you'll be able to exploit the fundamental advantage of agile project management: the ability to change requirements on short notice.

    The other good thing about the agile method is that it encourages you to have something 'buildable'/usable after each set period of time, say a week. It doesn't necessarily have to do something useful yet, but you never have your app in an unstable state.

    The point is that, being solo, you have the flexibility to take the best parts of the various methodologies and adapt them to your needs. As long as you keep yourself organized - try a free project & task manager, and a personal wiki - you'll benefit. No methodology by itself will help much if you're disorganized, or if you're simply following it because it's the PM flavour of the year rather than on its merits for your particular situation.

  • You need to consider Project Managers, Owners, Stakeholders, and Users as part of the team. I've actually been able to sit down with users while developing (esp. GUI, reports, workflow, logic, etc.).

  • The two agile practices that I get the most benefit from when working solo are

    • Keeping a prioritized backlog of work, and working through that list one, and only one, item at a time
    • Writing tests as I go (test-first when possible, test-last otherwise), and only checking in work when all tests pass

    The former keeps me from losing focus and finding myself off in the weeds. The latter helps me move quickly and gives me a safety net for trying risky things.

    For one-shot, scratch-an-itch projects, I'll sometimes skimp on tests to little ill effect, but taking things multiple steps at a time is a sure recipe for burning up time.