Monday, April 25, 2011

How can I check the security of my website?

What are the best ways to avoid security holes?

From stackoverflow
  • JavaScript / SQL injection would be the first thing I'd safeguard against. Also keep in mind that there are possible holes if you let users upload files to the site. Also be wary of admin control panels (cPanel, etc.), as they could be targets of brute-force attacks.

  • Check this question, and all the related questions on the right.

  • If you use any 3rd party libraries, make sure to subscribe to their mailing lists and stay informed of their security notices.

    I once saw an entire server taken out by a vulnerability in a 3rd party PHP library, which really was not pleasant. This may seem obvious, but you'd be surprised how many people don't do this, which is why these attacks are so effective. :)

  • If you don't have the skill/resources to do your own security testing, I would recommend WebInspect. I am not sure of the price, but it is the best tool I have tested for web security testing. A lot of tools such as Nessus won't really help in your case, because they are looking mainly at known flaws in the web server or in known packages.

    benlumley : There are companies that offer this sort of testing on a regular basis as a service as well.
  • Ratproxy :

    A semi-automated, largely passive web application security audit tool, optimized for an accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex web 2.0 environments.

    Detects and prioritizes broad classes of security problems, such as dynamic cross-site trust model considerations, script inclusion issues, content serving problems, insufficient XSRF and XSS defenses, and much more.

    You use ratproxy while you use the application as normal, and it highlights potential security flaws.

Can I install/update wordpress plugins without providing ftp access?

I am using WordPress on my live server, which only allows SFTP using an SSH key.

I want to install and upgrade plugins, but it appears that you are required to enter your FTP login to install them. Is there a way to install and upgrade plugins by manually uploading the files instead of having WordPress handle the entire process?

From stackoverflow
  • Usually you can just upload your plugin to the wp-content/plugins directory. If you don't have access to this directory via SFTP, I'm afraid you may be stuck.

    ceejayoz : Yep, just drop 'em in wp-content/plugins.
  • WordPress 2.7 lets you upload a zip file directly (there's a link at the bottom of the plugins page) -- no FTP access needed. This is a new feature in 2.7, and it works for plugins only (not themes yet).

    D. Lambert : BTW, upgrading is even easier -- you'll see an icon indicating that a new version is available, and you click "upgrade" and let it do its thing. Very nice. Even the WordPress core is upgraded this way - I went from 2.7 to 2.7.1 w/o uploading anything.
  • We use SFTP with SSH (on both our dev and live servers) and have tried (not too hard, though) to use the WP upload feature. I agree with Toby: upload your plugin(s) to the wp-content/plugins directory and then activate them from there.

  • Use the One Click Plugin Updater.

  • I developed a plugin but worry that my users are not going to understand how to FTP it to WordPress (it's a plugin for bloggers, who aren't always tech-savvy). I want the user to have the option of cutting and pasting the code and adding it to WordPress's sidebar.php. Is it possible to install a plugin that way?

  • It is possible to use SFTP or SSH to auto-update plugins in WordPress, but you need the ssh2 PECL extension. You can find out how to do it in the following tutorial.

Mapping XPath 1.0 data types to java

I'm using XPath 1.0 to process incoming web services messages. This can be quite hard to get right if schema data types are used, because XPath 1.0 does not recognize them (XPath 2.0 does, but there seems to be no full open-source implementation; I'm not sure if Saxon-B does this).

E.g., the literals "true" and "false" in an xs:boolean represent the boolean values True and False according to XML Schema, but XPath 1.0 will evaluate both of them to True.

This means that evaluating /test in a boolean context against <test>false</test> actually returns True.

The same goes for other data types as well: "12.78e-2" is a valid value for xs:double, but evaluates to Double.NaN, because XPath 1.0 numbers do not allow exponential notation.
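A schema-aware conversion is essentially a matter of applying XML Schema's lexical rules yourself. As a sketch of the idea (in Python purely for illustration, since the question is about Java):

```python
# xs:boolean accepts exactly "true", "false", "1", "0"; xs:double
# allows exponential notation. XPath 1.0 ignores both rules, treating
# any non-empty string as true and "12.78e-2" as NaN.

def parse_xs_boolean(lexical):
    """Map an xs:boolean lexical value to a real boolean."""
    value = lexical.strip()
    if value in ("true", "1"):
        return True
    if value in ("false", "0"):
        return False
    raise ValueError("not a valid xs:boolean: %r" % lexical)

def parse_xs_double(lexical):
    """Map an xs:double lexical value (exponent allowed) to a float."""
    return float(lexical.strip())

assert parse_xs_boolean("false") is False   # XPath 1.0 would say true
assert parse_xs_double("12.78e-2") == 0.1278
```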

javax.xml.datatype contains type mappings for duration and dateTime, but that's it.

XMLBeans contains easy-to-use converters between Java and XML Schema's built-in data types:

Node n = jaxp13XPathTemplate.evaluateAsNode(expression, context);
boolean b = XmlBoolean.Factory.parse(n).getBooleanValue();

Are there any other tools that might be helpful (and no, I'm not looking for a full-fledged XML binding framework)?

From stackoverflow
  • Hi There,

    I know you said you don't want a fully fledged binding framework, but have you looked at JiBX? It can be a bit of a pain to write the binding files, but you can generate the bindings from an XML schema if you have one, and it's remarkably fast.

    As an alternative to using XPath, have you considered parsing the XML into a DOM, which you could then manipulate?

    Karl

    sapporo : Karl, I haven't looked at JiBX closely so far. I'd rather avoid the byte code enhancement stuff if possible. Can you elaborate about your DOM idea? What exactly would I gain by having a DOM? Anyway, thanks for your input!
    Karl : If you want to process incoming XML it's always preferable to get the XML into some type-safe format, normally an object, to help simplify processing. DOMs are not type-safe, but depending on how complicated your XPath is, you may find it easier to extract and manipulate the data: http://www.jdom.org/
    sapporo : I know about JDOM and friends, but I'm using XPath for a reason. That's why I asked about "Mapping XPath 1.0 data types to java" specifically, and not about XML processing in general.

How to make Zend IDE 5.5.1 to not bother about backslashes?

I use Zend IDE and quite often use Analyze Code to quickly find undeclared or unused variables. Like all PHP developers, I also use regular expressions.

So the main question is: where do I set a checkbox or tune a config file to disable these warnings:

Bad escape sequence: \s (line NN)

Thanks for answers!

From stackoverflow
  • Why don't you just correct the mistyped string declarations? If you have the regular expression foo\sbar, write it as:

    'foo\\sbar'
    "foo\\sbar"
    
    Mr.ElectroNick : Unfortunately no. That would be MY bug, but I'm getting these messages with something like preg_match("|^a\sb$|is",$a,$b);
    Gumbo : Replace it by `preg_match("|^a\\sb\$|is",$a,$b);` or `preg_match('|^a\\sb$|is',$a,$b);`. (See http://docs.php.net/manual/en/language.types.string.php)
    Mr.ElectroNick : Well... Sadly I was looking like a novice with this question, after working with PHP for 7 years :-) But I was impressed that both [preg_match("|^a\sb$|is","a b",$out)] and [preg_match("|^a\\sb$|is","a b",$out)] work, and the analyzer doesn't complain about the second one. Thanks!
  • Window -> Preferences -> PHP -> Code Analyzer -> Bug -> Bad escape sequence

    Mr.ElectroNick : There's no Window menu item. I use Zend 5.5.1, which has only the following: File, Edit, Search, Go to, Project, View, Debug, Tools, Help.
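The same distinction exists in other languages; here is a small Python sketch (illustration only, since the question itself is about PHP and Zend IDE) of why writing the backslash explicitly satisfies both the regex engine and a static analyzer:

```python
import re

# A regex in an ordinary quoted string can spell the whitespace class
# as "\\s" (explicitly escaped backslash) or r"\s" (raw string, no
# escape processing). Both hand the identical pattern to the regex
# engine, and neither looks like a bad escape sequence to an analyzer.

explicit = "^a\\sb$"
raw = r"^a\sb$"

assert explicit == raw
assert re.search(explicit, "a b") is not None
assert re.search(explicit, "axb") is None
```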

SQL query: put values of an attribute as columns

I have a table errors(errorId, category, eDate).

I am trying to show as result the number of errors per week and categories. The query result should be something like:

Week    Category1    Category2
01      2            5 
02      0            20
From stackoverflow
  • The code I posted wasn't right. You need to look into Crosstab Queries in Access.

  • Access can't do pivots, as far as I know. In SQL Server you could do this with the PIVOT() construct.

    Edit: Actually, take a look at this.

    HLGEM : LOL Access did crosstab queries long before SQL Server had pivot
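If neither Access crosstab queries nor PIVOT are available, the crosstab can also be assembled in application code. A sketch (in Python, with made-up rows standing in for the errors table):

```python
from collections import defaultdict

# Build the week-by-category crosstab in code: count errors per
# (week, category) pair. The rows below are hypothetical
# (errorId, category, week) tuples derived from
# errors(errorId, category, eDate).

rows = [
    (1, "Category1", "01"), (2, "Category1", "01"),
    (3, "Category2", "01"), (4, "Category2", "01"),
    (5, "Category2", "01"), (6, "Category2", "01"),
    (7, "Category2", "01"),
]

counts = defaultdict(int)
for error_id, category, week in rows:
    counts[(week, category)] += 1

# Matches the first row of the sample output: week 01 -> 2 and 5.
assert counts[("01", "Category1")] == 2
assert counts[("01", "Category2")] == 5
```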

File encoding when reading a file with StreamReader

I am now having an issue where the Celsius symbol gets read as C instead of °C.

It looks like the encoding is the culprit. I tried to do this:

            using (StreamReader sr = new StreamReader(this._inFilePath,System.Text.Encoding.Unicode ,true))

instead of

            using (StreamReader sr = new StreamReader(this._inFilePath))

but now I am getting garbage... Does the original file's encoding have to match the StreamReader encoding? I am using Compact Framework 2.0.

I have found this online, but if I use it I have to read everything into a byte array, detect the end of each line, convert it to Unicode, and then proceed with the program logic. Has anyone used this class?

From stackoverflow
  • Yes, you need to specify the correct encoding when you construct your StreamReader. .NET might be able to detect the encoding for you. There are overloads for the StreamReader constructor which take a boolean parameter you can use to request this behavior.

    public StreamReader( string path, bool detectEncodingFromByteOrderMarks)

    gnomixa : So basically, if I want the file to be read in Unicode I have to save it in Unicode first, correct?
    gnomixa : how do I do this???
    Robert Lewis : You need to use an encoding that supports unicode. Normally this is UTF-8 or UTF-16 (which is called "Unicode" in .NET). Be sure not to use ASCII or ANSI (Encoding.Default). You could post another question about saving files, and describe your situation more.
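Byte-order-mark detection, which is what that boolean parameter enables, can be sketched like this (in Python for illustration; the mechanics are the same in .NET):

```python
import codecs

# Peek at the first bytes of a file and pick a decoder from the BOM,
# falling back to a default -- roughly what StreamReader does when
# detectEncodingFromByteOrderMarks is true.

def sniff_encoding(first_bytes, default="utf-8"):
    if first_bytes.startswith(codecs.BOM_UTF8):
        return "utf-8-sig"
    if first_bytes.startswith(codecs.BOM_UTF16_LE):
        return "utf-16-le"   # what .NET calls Encoding.Unicode
    if first_bytes.startswith(codecs.BOM_UTF16_BE):
        return "utf-16-be"
    return default

# "25 °C" saved as UTF-16 LE with a BOM:
data = codecs.BOM_UTF16_LE + "25 \u00b0C".encode("utf-16-le")
enc = sniff_encoding(data)
assert enc == "utf-16-le"
assert data[len(codecs.BOM_UTF16_LE):].decode(enc) == "25 \u00b0C"
```

The key point for the question: if the file was saved without a BOM in some ANSI code page, forcing Encoding.Unicode will decode it as garbage; the reader's encoding has to match how the file was actually written.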

XCode crashes on startup

When I startup Xcode I get the following error:

The application Xcode quit unexpectedly. Clicking on the Report... button gives the following:

Process:         Xcode [875]
Path:            /Developer/Applications/Xcode.app/Contents/MacOS/Xcode
Identifier:      com.apple.Xcode
Version:         ??? (???)
Build Info:      DevToolsIDE-9210000~1
Code Type:       X86 (Native)
Parent Process:  launchd [140]

Date/Time:       2009-03-12 15:13:14.839 -0500
OS Version:      Mac OS X 10.5.6 (9G55)
Report Version:  6

Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Crashed Thread:  0

Application Specific Information:
objc[875]: '/System/Library/PrivateFrameworks/DebugSymbols.framework/Versions/A/DebugSymbols' was not compiled with -fobjc-gc or -fobjc-gc-only, but the application requires GC
objc[875]: *** GC capability of application and some libraries did not match
...

The full detail can be seen here. I've never used Xcode before (although I did run it when I first installed it). Any ideas on what could be causing this?

From stackoverflow
  • Download and install the latest version of Xcode. That's what I did to fix this issue.

  • If you just installed Safari 4, either revert back to Safari 3 or install the latest version of Xcode. Hope that helps.

    Abdullah Jibaly : Yes, I did install Safari 4... Any help on installing the latest version of Xcode (I already ran System Update)?
    Ben Reeves : http://developer.apple.com/technology/Xcode.html
    Abdullah Jibaly : Got it, Apple Dev Connection. Thanks!
    Robert S. : @Abdullah, this is precisely the situation I was in. :)
    mt3 : uninstalling safari 4 beta does not eliminate the problem; you must upgrade xcode

Fastest way to join mysql 4.0 data from multiple tables?

Hi, I have 3 MySQL 4.0 tables: all have fields ID (int), type (int), and another field, value, which is either varchar(255), tinyint, or int.

I need to write them all out and I end up getting three DataTables, looping over them, and creating rows into a temporary table (in .NET 1.1).

Do you see any faster/cleaner way than this to join or just write out this data?

From stackoverflow
  • I am not sure if you are wanting to actually join or display the results from all three tables in one query.

    If you are just wanting flat-out results, your best bet would be to do a union such as:

    SELECT 
        ID, 
        Type, 
        Convert(varchar(255), Value) as Value 
    FROM 
        table1
    UNION
    SELECT 
        ID, 
        Type, 
        Convert(varchar(255), Value) as Value 
    FROM 
        table2
    UNION
    SELECT 
        ID, 
        Type, 
        Convert(varchar(255), Value) as Value 
    FROM 
        table3
    

    Note: I am doing the convert so that you can get the most stable form (the varchar version) of all three of your fields.

    Spikolynn : Thanks, actually it worked even without converting, but i'm worried because instead of 420 results I'm only getting 410 this way - maybe some are duplicates and are merged?
    TheTXI : @Spikolynn: it may be possible that an implicit conversion is occurring in your ints and tinyints (or your varchars are trying to convert to ints), and when a row fails conversion it is getting dropped from the result set.
    Spikolynn : TheTXI: I tried with Convert(Value, char) instead of Value all three times, but it still returns too few results. I don't mind much though, because the app should work the same if they are just duplicates, as I presume.
    Spikolynn : yup, I checked for duplicates today and there were exactly as many as the missing records count. Maybe I should add a unique constraint over the three fields.
    Spikolynn : oh, and I had to use convert, because otherwise I would get a .NET exception about incompatible column types :S
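The duplicate theory is easy to check: UNION removes duplicate rows across the combined result, while UNION ALL keeps every row. A stand-in demonstration (Python, hypothetical rows):

```python
# Two hypothetical (ID, Type, Value) tables sharing one identical row.
table1 = [(1, 1, "a"), (2, 1, "b")]
table2 = [(2, 1, "b"), (3, 2, "c")]

union_all = table1 + table2              # like UNION ALL: keeps dupes
union = list(dict.fromkeys(union_all))   # like UNION: dedupes

assert len(union_all) == 4
assert len(union) == 3   # the shared row appears only once
```

So switching the query to UNION ALL should bring back the missing 10 rows if they really are exact duplicates.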

How can I mount a remote directory on my computer?

I have all my code and data on my computer at work. But often I work from home, so I ssh into the work computer, connect to my GNU screen session (which includes a text-based emacs) and start hacking, compiling, etc.

This can get a bit annoying sometimes, especially if my connection is slow, and the remote emacs lags behind my typing. Also, it would be nice to use an IDE to work on the code.

Is there a solution? Sure, I could sync the code with rsync or svn, but then I think I have too many directories scattered around, lots of data that I don't want to copy around, etc. And I always want to compile and run remotely (because the work computers are fast and plentiful), but coding and typing itself would be better on my home computer.

Can I mount my remote home directory on my local computer? What do I need for this to work? Would I need my sysadmin for this? (I'd rather not; he usually doesn't reply to e-mails, or is unwilling to do anything people ask him.) Can I do something with VPN? How do you do it when you work from home?

(As for the security restrictions, right now I can ssh and scp, but not telnet or ftp.)

UPDATE: I have a Mac at home, and Linux at work.

From stackoverflow
  • I have a similar situation and I use NetDrive. You may be interested in that!

    From the site : With NetDrive, managing your remote FTP and WebDAV servers will be as easy as any old file folders on your PC.

    Once you mount the local drive, you don't need to run an application or an FTP client interface; a simple drag-and-drop in Windows Explorer will be sufficient to transfer and manage files.

    Update : Sorry, I just now read the update that you use a Mac.

    : +1 Some people might be interested in a Windows solution anyway.
  • Check this Lifehacker article: Mount a file system on your Mac over SSH. As long as you have SSH access to your workstation, you do not need a sysadmin for this. However, you need to install MacFUSE and SSHFS on your Mac. Check the article for details.

    : +1 sshfs works great!

How do I enable saving of filled-in fields on a PDF form?

Some PDF forms can be saved, including all filled-in field data.

Some others cannot be saved, and all filled-in field data is lost.

How do I enable saving of filled-in fields on my PDF form?

Thanks!

From stackoverflow
  • You can use the free Foxit Reader to fill in the forms, and if you pay a little you can design the forms the way you want.

    You can also use iText to programmatically create those forms.

    There are free online services that allow you to upload a PDF and add fields as well.

    It depends on how you want to do the designing.

    EDIT: If you use Foxit Reader, you can save any form that is fillable.

    eleven81 : I have used Foxit, and it does not have the ability to enable saving of filled-in fields on my PDF form.
    Milhous : We use version 2.3 at work and it allows our customers to save the fields.
    Milhous : You are correct that Foxit can't "enable" saving for users using Adobe, but with Foxit it works.
  • There is a setting inside the PDF file that turns on the "allow saving with data" bit. However, it requires that you have a copy of Adobe Acrobat installed to change the bit.

    The only other option is to print it to a PDF print driver which would save the data merged with the pdf file.

    UPDATE: The relevant information from adobe is at: http://www.adobeforums.com/webx?13@@.3bbb313f/7

    eleven81 : I have Acrobat 6.0 Professional installed. Where is that setting?
  • When you use Acrobat 8 or 9 to enable user rights on a PDF so that it can be saved with the data, it adds about 20 KB to the PDF.

    The other possibility is to use CutePDF Pro, add a submit button, and have the XFDF data submitted to yourself as an email or to a web server. The XFDF data can then reload the original PDF with your data.

SQL 2005 instance won't work with DNS

Basically I have a SQL Server 2005 Standard server with a named instance installed (server\instance1). I also have created a DNS entry (dnsDBServer) that points to the IP address of the SQL server. A web application we have can connect using the following methods (ipaddress\instance1, server\instance1) but cannot connect using dnsDBServer\instance1. Of course this sort of defeats the purpose of the DNS entry. I was wondering if SQL aliases would help fix this problem, or if anyone has a solution. Thanks.

From stackoverflow
  • Most likely your DNS resolution isn't working exactly like you thought it would.

    Do a tracert from the machine that is trying to talk to your sql server and see where it's going to.

    You should also look at the firewall settings between the requesting machine and the server to see if there is anything else affecting it.

    Mike T : I can ping (and use tracert) the dnsDBServer and it points to the correct IP address. I am using the fully qualified domain suffix as well. I did notice that if I create a .udl on my desktop and connect using dnsDBServer\instance1, it works just fine. That is using the SQL Native Client.
    Chris Lively : Then post the code that performs the connection. Maybe we can see what the problem is with it.
  • Is there a domain mismatch? E.g. server.bob.net vs. dnsDBServer.fred.com?

    1. Use ping -a to see what the domain resolution is for both DNS alias and server name.
    2. Also, in the DNS settings of a client, check the "Appends these DNS suffixes (in order)" list.
    3. Can SSMS on the SQL box connect to dnsDBServer\instance1?
    4. tracert as suggested already
    Mike T : I am able to connect to dnsDBServer\instance1 from SSMS and from a .udl on my machine. Is there something in the provider I'm using? See the above comment as well.
  • Ha, I just realized that the web server (which is the same server the SQL Server runs on) didn't have its own DNS updated. I should have checked that first thing. I ran ipconfig /flushdns and it started working. It never dawned on me to check the server itself. Thanks for your help.

MailMessage Mask From Email address

I need to send an email from my application with display text in the From field. I have the From email address, but how do I mask the email address so the From field renders the text rather than the email address?

From stackoverflow
  • MailAddress (which you will use for the 'To' and 'From' properties) can take a display name along with the address in its constructor. You should verify how different mail clients display the value (most use a combination of both).

  • You can add a name to the sender this way, using the MailAddress object:

    MailMessage message = new MailMessage();
    message.From = new MailAddress("john.doe@foo.bar.com", "John Doe");
    
    Dedrick : MailMessage message = new MailMessage(); message.From = new MailAddress("John Doe", "john.doe@foo.bar.com");
    splattne : According to http://msdn.microsoft.com/en-us/library/1s17zfkf.aspx it's address first, then name
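The same display-name idea exists in Python's standard library, for comparison (note the order: name first, then address, in email.utils.formataddr):

```python
from email.utils import formataddr, parseaddr

# Combine a display name with an address; mail clients render the name.
from_header = formataddr(("John Doe", "john.doe@foo.bar.com"))
assert from_header == "John Doe <john.doe@foo.bar.com>"

# parseaddr splits the header value back apart.
assert parseaddr(from_header) == ("John Doe", "john.doe@foo.bar.com")
```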

serialport.Write() - how to format bytes properly

I am required to talk back to the scale by making it PRINT programmatically.

The manual states that the print command is simply P followed by a carriage return. I use the following code:

Byte[] bytes = {80, 13};
_sp.Write(bytes,0,bytes.Length);

But every other time I get an ES code back from the scale, which means error, but it still prints. I just want to find out why, even though it works, I am getting ES back.

Any ideas? Is the format of bytes[] correct?

From stackoverflow
  • The bytes are correct, according to what you say that the manual states.

    Perhaps the manual is unclear (or has this specified elsewhere), and by "carriage return" it actually means the CR+LF newline sequence that is used in some systems (e.g. MS-DOS). The bytes would then be:

    Byte[] bytes = { 80, 13, 10 };
    
    gnomixa : thanks I will try that.
    gnomixa : I am getting the ES code back every time with this... if I don't append 10, I only get it once in a while. Not sure why the damn balance doesn't like the format that's clearly specified.
    gnomixa : as I mentioned, I do get the weight back from the balance. But I would like to either swallow the ES or understand WHY I am getting it back.
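The byte values themselves are easy to sanity-check. In Python (illustration only; the question is C#), the two candidate terminators look like this:

```python
# "P" is ASCII 80 (0x50), carriage return is 13 (0x0D), line feed is
# 10 (0x0A). Some devices treat a bare CR as a complete command,
# others want the full CR+LF pair -- which may explain the
# intermittent ES response.

print_cmd_cr = bytes([80, 13])
print_cmd_crlf = bytes([80, 13, 10])

assert print_cmd_cr == b"P\r"
assert print_cmd_crlf == b"P\r\n"

# With the third-party pyserial library the write would then be
# ser.write(print_cmd_cr) -- equivalent to the C# _sp.Write() call.
```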

How to insert a value in a string at certain positions c#

I have a program that gets a string from a method. I want to know how to insert a string value into that string at certain positions.

For example:

mystring = "column1 in('a','b')column2 in('c','d')column3 in('e','f')";

Here, how would I insert the string value " and " after every occurrence of the character ')' in mystring?

PS: If possible, also explain how to avoid inserting it right at the end.

From stackoverflow
  • Strings are immutable, so you cannot 'just' change the value of that string. Each modification that you want to make to a string leads to a new instance of a string.

    This is maybe how you could achieve what you want:

    string s = " x in (a, b) y in (c, d) z in (e , f)";
    
    string[] parts = s.Split (')');
    
    StringBuilder result = new StringBuilder ();
    
    foreach( string part in parts )
    {
       if (part.Length == 0)
          continue; // skip the empty fragment after the final ')'
       result.Append (part + ") and ");
    }
    // drop the trailing " and "
    result.Length -= " and ".Length;
    Console.WriteLine (result.ToString ());
    

    But maybe there are better solutions ...

    Anyway, how come you receive that string (which looks like part of the where clause of a SQL statement) in that way?

  • Probably the simplest:

    mystring = mystring.Replace(")", ") and ");
    mystring = mystring.Substring(0, mystring.Length - " and ".Length);
    
    Frederik Gheysels : That I didn't think of it ... 8)
  • You could accomplish this with replace..

    string mystring = "column1 in('a','b')column2 in('c','d')column3 in('e','f')";
    mystring = mystring.Replace(")", ") and ").TrimEnd(" and".ToCharArray());
    

    Resulting In:

    "column1 in('a','b') and column2 in('c','d') and column3 in('e','f')"
    
    Lucas : be careful with that TrimEnd(), it will trim ANY ' ', 'a', 'n', or 'd' at the end, not just " and".
    Quintin Robinson : Yes, I should've mentioned that it works for the case in the question, but there could be unintentional side effects if the end of the string contains any of the characters in the TrimEnd that aren't intended to be removed.
  • If you mean, and I'm taking your string literally and as it comes:

    mystring = "column1 in('a','b')column2 in('c','d')column3 in('e','f')"
    

    Then you could just do:

    mystring = mystring.Replace(")c", ") and c");
    

    Which would result in:

    mystring = 
        "column1 in('a','b') and column2 in('c','d') and column3 in('e','f')"
    

    This is presuming you don't want a trailing "and".

    HTH
    Kev

  • System.Text.RegularExpressions.Regex.Replace(
        mystring, "\\)(?=.+$)", ") and ");
    

    The .+$ portion of the regular expression ensures that the closing parenthesis is not at the end of the line. If you are going to be doing this often, I'd recommend creating and persisting a Regex object for the pattern.

    // Do this once somewhere:
    System.Text.RegularExpressions.Regex insertAndPattern =
        new System.Text.RegularExpressions.Regex("\\)(?=.+$)");
    
    // And later:
    insertAndPattern.Replace(mystring, ") and ");
    

    EDIT: Just realized I'm an idiot. Fixed the patterns above from "\\).+$" to "\\)(?=.+$)", so that the .+$ part is not included (and thereby replaced) in the match.

    zohair : Sorry...I have no idea what Regex is and what it does...
    NTulip : So because you don't know what regex is - you're just going to write off his comment? Wow.
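For anyone put off by the regex, the lookahead version is worth seeing in action; here is the same pattern exercised in Python (the syntax \)(?=.+$) is identical in .NET):

```python
import re

# \) matches a closing parenthesis; (?=.+$) is a lookahead requiring
# at least one more character before the end of the line, so the
# final ')' is left untouched -- no trailing " and " to trim.

mystring = "column1 in('a','b')column2 in('c','d')column3 in('e','f')"
result = re.sub(r"\)(?=.+$)", ") and ", mystring)

assert result == "column1 in('a','b') and column2 in('c','d') and column3 in('e','f')"
```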

JOIN Performance: Composite key versus BigInt Primary Key

We have a table that is going to be, say, 100 million to a billion rows (table name: Archive).

This table will be referenced from another table, Users.

We have 2 options for the primary key on the Archive table:

Option 1: dataID (bigint)

Option 2: userID + datetime (4-byte version).

Schema:

Users - userID (int)

Archive - userID - datetime

OR

Archive - dataID (big int)

Which one would be faster?

We are shying away from using option 1 because bigint is 8 bytes, and with 100 million rows that will add up to a lot of storage.

Update: OK, sorry, I forgot to mention that userID and datetime have to be there regardless, so that was the reason for not adding another column, dataID, to the table.

From stackoverflow
  • Some thoughts, but there is probably not a clear cut solution:

    • If you have a billion rows, why not use int which goes from -2.1 billion to +2.1 billion?

    • Userid, int, 4 bytes + smalldatetime, 4 bytes = 8 bytes, same as bigint

    • If you are thinking of userid + smalldatetime then surely this is useful anyway. If so, adding a surrogate "archiveID" column will increase space anyway

    • Do you require filtering/sorting by userid + smalldatetime?

    • Make sure your model is correct, worry about JOINs later...

    R. Bemrose : As an alternative to the first bullet, how about an unsigned int, which goes from 0 to 4.2 billion?
    gbn : SQL Server does not have unsigned int :-)
  • What's with option 3: Making dataID a 4 byte int?

    Also, if I understand it right, the archive table will be referenced from the users table, so it wouldn't even make much sense to have the userID in the archive table.

  • I recommend that you set up a simulation to validate this in your environment, but my guess would be that the single bigint would be faster in general; however, when you query the table, what are you going to be querying on?

    If I were building an archive, I might lean toward having an autoincrement identity field and then using a partitioning scheme to partition based on DateTime and perhaps userID, but that would depend on the circumstances.

  • Concern: Using UserID/[small]datetime carries with it a high risk of not being unique.

    Here is some real schema. Is this what you're talking about?

    -- Users (regardless of Archive choice)
    CREATE TABLE dbo.Users (
        userID      int           NOT NULL  IDENTITY,
        <other columns>
        CONSTRAINT <name> PRIMARY KEY CLUSTERED (userID)
    )
    
    -- Archive option 1
    CREATE TABLE dbo.Archive (
        dataID      bigint        NOT NULL  IDENTITY,
        userID      int           NOT NULL,
        [datetime]  smalldatetime NOT NULL,
        <other columns>
        CONSTRAINT <name> PRIMARY KEY CLUSTERED (dataID)
    )
    
    -- Archive option 2
    CREATE TABLE dbo.Archive (
        userID      int           NOT NULL,
        [datetime]  smalldatetime NOT NULL,
        <other columns>
        CONSTRAINT <name> PRIMARY KEY CLUSTERED (userID, [datetime] DESC)
    )
    CREATE NONCLUSTERED INDEX <name> ON dbo.Archive (
        userID,
        [datetime] DESC
    )
    

    If this were my decision, I would definitely go with option 1. Disk is cheap.

    If you go with option 2, it's likely that you will have to add some other column to your PK to make it unique, and then your design starts degrading.

    HLGEM : Plus I would never use a datetime in a PK unless there was a trigger ensuring it could never be updated after the initial insert. That's just an accident waiting to happen.
    bigint : great point on the uniqueness Rob!
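The space trade-off from the answers can be made concrete with a little arithmetic (ignoring index and page overhead):

```python
# Key widths: int (4 bytes) + smalldatetime (4 bytes) = 8 bytes, the
# same as one bigint. But since userID and datetime must be stored
# regardless (per the question's update), a surrogate dataID is 8
# extra bytes per row.

INT, SMALLDATETIME, BIGINT = 4, 4, 8
rows = 100_000_000

composite_key = INT + SMALLDATETIME
assert composite_key == BIGINT             # same width per key

surrogate_overhead = rows * BIGINT         # extra bytes for option 1
assert surrogate_overhead == 800_000_000   # ~0.8 GB before indexes
```

So the composite key is not smaller per key; the saving only comes from not adding the extra dataID column on top of columns that must exist anyway.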

Delete Directory from ASP.NET application returns to new session

I'm deleting a directory from within an ASP.NET application. The deletion goes fine, but when I return from it, all my session data from before the delete is lost.
It doesn't matter whether I use:

                if (Directory.Exists(folderPath))
                    Directory.Delete(folderPath, true);

Or:

                System.IO.DirectoryInfo d = new System.IO.DirectoryInfo(folderPath);
                if (d.Exists)
                    d.Delete(true);

In both cases I lose my session data.

Has anyone run into this problem?

From stackoverflow
  • Is the directory within the same application? Then deleting it will cause an AppDomain restart, which will result in loss of session state.

    Lea Cohen : The directory is in a virtual directory in the application. Is there a way to get around the AppDomain restart?
    John Saunders : No, not that I know of. Directories within the web site should not be used for storage.
  • Yes! Deleting a directory IIS is serving causes a reset (or something). I have had this problem; I redesigned the app to not delete directories.

    Shame on the -1 for the question, this is a real problem. +1 for someone with a fix.

  • If you are deleting a subdirectory within your application, your app domain will restart. This removes all session data. To avoid this issue, only add/remove directories outside your application's home directory.

How do you make one generic select method for your DAL ?

Scenario

You have an assembly of Data Transfer Objects containing 10 classes that exactly represent 10 tables in your database. You generate/build a DAL layer that has methods like:

  DTOForTable1[] GetDataFromTable1();
  DTOForTable2[] GetDataFromTable2();

and so on....

Question

How do I make a method that hides the numerous per-table methods from the client code? The method I would want, for example in the service or business layer, could look like:

SomeGenericDTO[] GetDataFromTable(TableTypeEnum tableTypeEnum);
  1. How do I do this? Is this even possible?

  2. If yes for (1), is it a good practice?

  3. If yes for (1) and (2), does this simplify or complicate the design?

Thanks in advance.

From stackoverflow
  • You could define it like:

    T[] GetDataFromTable1<T>() where T:IDto
    {
       // you can know the table with the type of T
    }
    

    That said, I would rather either have the different methods or even separate classes to work with. I use the repository pattern instead; I suggest looking for info on it.

    Perpetualcoder : I have read about the Repository pattern but I don't think I understand it fully. Can you recommend some link?
    Bill Karwin : http://www.martinfowler.com/eaaCatalog/repository.html
    Bill Karwin : In "Domain-Driven Design" by Eric Evans, chapter 6 contains info about the Repository pattern.
  • Here's how my DAL does it:

    List<entity> customers = SQL.Read(new SearchCriteria(), new Customers());
    

    To perform joins:

    List<entity> customers = SQL.Read(new SearchCriteria(), new Customers(new Orders(new OrderDetails())));
    

    The DTO class itself determines which table to access and its properties determine which columns to retrieve.

    I can't answer if it is a best or good practice. It is the practice that has been working for me for a long time. There are no extraneous methods such as "GetById", "GetAll", etc.

    Perpetualcoder : This is very interesting! Can you provide a link or sample?
    Otávio Décio : @Perpetualcoder - email me at ocdecio at gmail dot com.
  • It's very common these days to implement your concrete table classes by inheriting from an abstract table-access class. The abstract class has generic methods to query a table, and each concrete class declares its corresponding database table (and perhaps its columns and inter-table relationships).

    Design patterns that help include ActiveRecord and Table Data Gateway.

Silverlight Error "Layout Cycle Detected Layout could not complete" when using custom control

I'm building a custom control in Silverlight by deriving from ContentControl and doing some special formatting to put a dropshadow behind the contents.

I've nearly got it working but have recently run into a bizarre error. It works fine unless it contains a Border, or a Grid/StackPanel/etc. that does not have an explicitly defined height and width.

I get a JavaScript error in IE, and the text says:

Runtime Error 4008... Layout Cycle Detected... Layout Could Not Complete.

If I specify a height and width on the contained grid/stackpanel/etc it works fine.

There is a ton on the web about this error when too many textboxes are used (over 250), but I'm able to reproduce my error with a single button in a grid.

I have no textboxes at all on the page. The error has to do with a detected infinite loop. I set a few breakpoints in the code and it seems that the "SizeChanged" event is getting called a lot during rendering, and each time the height/width increments by 10.

I'm assuming that setting a default height/width causes it to skip this incrementing of the number, but I have no idea why this error is happening.

Has anyone run into this, or have any ideas?

From stackoverflow
  • There is a good blog post on this error here.

    Basically, what can happen is that you're changing some size in a MeasureOverride somewhere, which causes another measure pass, which changes the size, which causes another measure, and so on. I ran into this once before and fixed it by removing any code that caused or triggered a layout update during the layout cycle.

    Matthew Timbs : This is the problem I'm seeing. The linked blog post suggests only doing your layout stuff on initial load and when the silverlight app (the containing user control) changes its size. This might work for me, but I've not yet tested it when doing animations and other actions. Thanks for the help!
  • A common cause is handling SizeChanged and then in the handler doing something that affects the size of the element. Sometimes this is not obvious - it could be modifying child elements which affect the size of their container for instance.
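Both answers amount to the same rule: break the feedback loop by never triggering a new layout pass unless the size genuinely changed. A minimal sketch of that guard in plain Java (hypothetical class and handler names; in Silverlight itself this would be a SizeChanged handler in C#):

```java
// Sketch of the re-entrancy guard: only react when the size actually changes.
// A SizeChanged handler that unconditionally resizes its own element
// re-triggers itself forever -- the "layout cycle" Silverlight detects.
class DropShadowPanel {
    private int lastWidth = -1;
    private int lastHeight = -1;
    int layoutPasses = 0;

    void onSizeChanged(int newWidth, int newHeight) {
        if (newWidth == lastWidth && newHeight == lastHeight) {
            return; // size is stable: do NOT invalidate layout again
        }
        lastWidth = newWidth;
        lastHeight = newHeight;
        layoutPasses++;
        // ...recompute the drop-shadow geometry here...
    }
}
```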

How do I set the cursor to the Drag Copy/Move cursors in Win32?

It doesn't seem to be one of the standard cursors (like IDC_ARROW), so how can I load this?

From stackoverflow
  • You need to add a custom cursor as a resource to your application, get the resource handle, and then use SetCursor(...).

    The drag/copy/move cursors aren't part of the standard library - your application will need its own. The standard cursors are all listed here:

    http://msdn.microsoft.com/en-us/library/ms648391(VS.85).aspx

    That being said, IDC_HAND does exist on newer operating systems, which may be what you are looking for...

    wchung : Darn, we're deploying for multiple windows platforms including Vista/XP, so I guess we would have to bundle icons for each?
    Reed Copsey : Not necessarily. You can use the same icons on XP and Vista. Vista does add the ability to have higher-quality icons, though, so if you want them to look as nice as the other icons on Vista, you'll need multiple resources.
    wchung : Sorry, I meant cursor, not icon -- since vista/xp seems to have different plus/move cursors. Thanks though.
    Reed Copsey : Yeah - same issue, though. You'd just need to load multiple icons into your resource file, and pick the one you want.

Getting ASP.NET Mvc and Web Forms to work together

Hello,

Sorry if this as already been asked.

I am currently working on a small feature and am trying to implement the ASP.NET Mvc framework into my current Web Forms web application. I have been following the Professional ASP.NET 3.5 Mvc Chapter 13 PDF document that I recently found on Stack Overflow to get Web Forms and Mvc to work together. I have completed all three steps:

  1. Added the references to the libraries System.Web.Mvc, System.Web.Routing, and System.Web.Abstractions

  2. Added the two directories to my web application: controllers and views

  3. Updated the web.config to load the three assemblies mentioned in step one and registered the UrlRoutingModule HttpModule. I also added the initial routing settings to my Global.asax file

    RouteTable.Routes.MapRoute( "Default", "{controller}/{action}/{id}", new { controller = "Support", action = "Index", id = "" } );

Once I try and run my newly created page following the ASP.NET Mvc framework I get a NullReferenceException on the following piece of code:

<%= Html.Encode(ViewData["Message"]) %>

In my controller I have ViewData["Message"] being set to "Message!" just as a test. Have I missed something setting my web application to work with ASP.NET Mvc?

Any help would be greatly appreciated.

Thanks,

From stackoverflow
  • From playing with the early betas until today, I find it easier to create a new MVC application and "import" my existing files into the solution than it is to get all of the settings right in an existing application. I see no reason to get hot and heavy over setting up plumbing I can have written for me from a template. Perhaps I am just too lazy.

    I find that I can get "legacy ASP.NET" web forms working fine with MVC. The only hitch in the whole project is getting the app routed to my default.aspx to begin with. Much easier than running through multiple steps to include MVC.

    I know this is not a direct answer to your question, but I think trying it will solve your problem, as well. Another benefit is you leave the old "legacy" source in case a bug fix comes in. But, then, you should be under source control anyway.

Boolean[] vs. BitSet: Which is more efficient?

What is more efficient in terms of memory and CPU usage — an array of booleans or a BitSet? Specific BitSet methods are not used, only get/set/clear (==, =, Arrays.fill respectively for an array).

From stackoverflow
  • I believe that a BitSet is more memory- and CPU-efficient, as it can internally pack the bits into ints, longs, or native data types, whereas a boolean[] requires a byte for each bit of data. Additionally, if you were to use the other methods (and, or, etc.), you would find that the BitSet is more efficient, as there is no need to iterate through every element of an array; bitwise math is used instead.

    EFraim : Memory efficient - probably true. CPU efficient - most certainly not. It is almost always less efficient to perform two bitwise operations (shift/and or shift/or) and up to two memory accesses (though most likely cached) than a single memory access on x86.
    Jon Skeet : @EFraim: By reducing the amount of memory used you're increasing the chance of having everything in cache. Cache misses are very expensive. I wouldn't be at all surprised to see this factor make BitArray faster.
    Ron : For example: a bitset would outperform boolean[] if the whole bitset fits in the cache, but not the boolean[], and random access were required.
  • Going from Java to the CPU is totally VM-specific. For instance, it used to be that a boolean was actually implemented as a 32-bit value (quite probably still true to this day).

    Unless you know it is going to matter you are better off writing the code to be clear, profile it, and then fix the parts that are slow or consuming a lot of memory.

    You can do this as you go. For instance I once decided to not call .intern() on Strings because when I ran the code in the profiler it slowed it down too much (despite using less memory).

  • From some benchmarks with Sun JDK 1.6 computing primes with a sieve (best of 10 iterations to warm up, give the JIT compiler a chance, and exclude random scheduling delays, Core 2 Duo T5600 1.83GHz):

    BitSet is more memory-efficient than boolean[] except for very small sizes. Each boolean in the array takes a byte. The numbers from Runtime.freeMemory() are a bit muddled for BitSet, but still lower.

    boolean[] is more CPU efficient except for very large sizes, where they are about even. E.g., for size 1 million boolean[] is about four times faster (e.g. 6ms vs 27ms), for ten and a hundred million they are about even.

    basszero : Can you post your test?
    basszero : I suspect that some of the BitSet style operations (and, or, not) are faster as BitSet instead of array. Worth noting which operations are better. The title is going to mislead everyone into never using a BitSet again
    starblue : The test doesn't use set operations, and is biased towards writing.
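For reference, a miniature version of such a sieve benchmark, written against both representations (a sketch of the idea, not the poster's actual benchmark code); wrapping each call in System.nanoTime() and Runtime.freeMemory() measurements reproduces the comparison:

```java
import java.util.BitSet;

// The same prime sieve run over boolean[] and BitSet must produce identical
// results; only the timing and memory footprint should differ.
class SieveDemo {
    static int countPrimesBooleanArray(int n) {
        boolean[] composite = new boolean[n + 1]; // one byte per flag
        for (int i = 2; (long) i * i <= n; i++)
            if (!composite[i])
                for (int j = i * i; j <= n; j += i) composite[j] = true;
        int count = 0;
        for (int i = 2; i <= n; i++) if (!composite[i]) count++;
        return count;
    }

    static int countPrimesBitSet(int n) {
        BitSet composite = new BitSet(n + 1); // one bit per flag
        for (int i = 2; (long) i * i <= n; i++)
            if (!composite.get(i))
                for (int j = i * i; j <= n; j += i) composite.set(j);
        int count = 0;
        for (int i = 2; i <= n; i++) if (!composite.get(i)) count++;
        return count;
    }
}
```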
  • It depends, as always. Yes, BitSet is more memory-efficient, but as soon as you require multithreaded access, boolean[] might be the better choice. For example, for computing primes you only ever set a boolean to true, and therefore you don't really need synchronization. Hans Boehm has written a paper about this, and the same technique can be used for marking nodes in a graph.

    Randolpho : provided that your boolean array doesn't grow, that'd certainly be better for concurrent use.
  • Roughly:
      • Boolean[] uses about 4-20 bytes per boolean value.
      • boolean[] uses about 1 byte per boolean value.
      • BitSet uses about 1 bit per boolean value.

    Memory size might not be an issue for you in which case boolean[] might be simpler to code.

  • A bit left-field of your question, but if storage is a concern you may want to look into Huffman compression. For example, 00000001 could be squeezed down by frequency to something equivalent to {(7)0, (1)1}. A more "randomized" string 00111010 would require a more complex representation, e.g. {(2)0, (3)1, (1)0, (1)1, (1)0}, and take up more space. Depending on the structure of your bit data, you may get some storage benefit from its use, beyond BitSet.

Java applet white screen

I'm trying to get to the bottom of a problem with our Java applet based program. It quite regularly seizes up with an unresponsive GUI (or a white screen). This of course only happens when deployed at a customer site :-(. They are running a version of the Sun JVM in 1.5 series (not sure the exact release).

We have a theory that it's to do with the applet running out of heap space - does that sound plausible? The other thing that I have set up on my machine is disabling direct draw, but that was mainly to avoid weird artefacts on other applications.

They are seeing the problem on Citrix and on regular PCs, but obviously there is a limit to what the users on Citrix can do.

Any suggestions?

From stackoverflow
  • In order to solve the problem, you must first be able to reproduce the problem. You will need an identical system in order to troubleshoot this, making one change at a time while keeping everything else equal to determine the cause(s).

    Allain Lalonde : "In order to solve the problem, you must first be able to reproduce the problem." Not true. It helps, but you'll never have a system exactly like the system it occurs on.
    Greg Reynolds : Absolutely - I have no access to a Citrix system for a start, or 40 concurrent users even if I did!
  • First of all ensure the customer uses the latest release of the JVM they are using, and make them enable the Java console inside their browser (this requires some research from you).

    Then when it happens again, tell them to look at the console window and cut-paste the contents in a mail to you.

  • Running out of heap space should cause an OutOfMemoryError to be thrown. This case sounds like a typical deadlock. To find where it is, you want a stack dump of all the threads. IIRC you can do it through the console, and from 1.6 the JDK includes jps and jstack.

    Greg Reynolds : Thanks - I'll see what they come back with. I expect the console will show that or the out of memory error.
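As a fallback when jstack isn't available (e.g. on a locked-down Citrix client), the applet itself can log every thread's stack to the Java console via Thread.getAllStackTraces(), which has been available since Java 5 - a sketch (class and method names are hypothetical):

```java
import java.util.Map;

// Dump every live thread's name, state, and stack trace as one string,
// which an applet could print to the Java console to help spot a deadlock.
class StackDumper {
    static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append("\" state=")
              .append(e.getKey().getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }
}
```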
  • Just to add to this answer (to build the knowledge base as I'm currently looking into this).

    There are (at least) two distinct white screens related to applets.

    1. Deadlock (as mentioned by Tom) - the area will not refresh when you drag a window in front of it, so you get the strange window-trails effect.

    2. VM crash - the area becomes white and the Java VM closes (search for hs_err_pid*.log; the location depends on the browser)