Sunday, May 1, 2011

Receive A File Via PHP Web Service

I'm writing a PHP web service and want to receive a file as an array of bytes (something similar can be done in C#). How can I do this in PHP? I'm using PHP 5 with NuSOAP.

From stackoverflow
  • Perhaps you only need to send a URI and have the client grab the file... no?

    You can also encode the file with base64.

  • Hey,

    what about PHP streams? I have worked with streams in C# before, not in PHP, but it seems like they should work :)

    check out this presentation (starting from slide 18)

    good luck!
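    The base64 suggestion above can be sketched as follows (shown in Python for illustration; in PHP the same round trip uses base64_encode() and base64_decode(), with the encoded string travelling inside the SOAP payload):

```python
import base64

def encode_file(path):
    # Read the raw bytes and encode them as base64 text,
    # which is safe to embed in an XML/SOAP message.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def decode_file(encoded, path):
    # Reverse step on the receiving end: decode the base64
    # text and write the original bytes back to disk.
    with open(path, "wb") as f:
        f.write(base64.b64decode(encoded))
```

    The cost is roughly a 33% size increase on the wire, which is usually acceptable for config-sized files.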

Problems using dynamic linked libraries (wxWidgets) from a DLL

We created a plugin; it is a DLL (run-time dynamic linking) which uses a third-party library (wxWidgets) and also links to it dynamically. The host software seems to scan our plugin, but the exported functions are not called. We checked all dependencies with Dependency Walker.

We see in the debugger that the plugin is loaded, but the DllMain is not called, and the plugin is unloaded.

We tried loading our plugin from a simple test application using LoadLibrary and GetProcAddress which recognized and called the exported functions.

Having wxWidgets linked statically worked fine, though.

Does anyone have an idea why the exported functions and DllMain are not called, or can anyone point out a tool capable of monitoring the whole DLL loading process?

From stackoverflow
  • If wxWidgets is loaded already into the process address space before your plugin is loaded (the host app could do that, or there might be another plugin linking to wxWidgets which is loaded before yours), then there might be a chance that it is another version, missing some of the entry points that your plugin needs. Running the host app under DependencyWalker or WinDbg should show you which wxWidgets DLL is loaded, and you could try to load your plugin from your test app using exactly the same wxWidgets DLL. That should reveal whether there are missing dependencies.

  • Perhaps the host software does some funky things when loading the plugin and doesn't like wxWindows.

    Anyway, try using Process Explorer from the Sysinternals suite to check what the process is doing.

  • Here's something useful... maybe it can help you:

    http://taringa.net/posts/ebooks-tutoriales/6888465/wxWidgets-y-DLL-de-recursos.html

How might a site like Stack Overflow pass user information around in ASP.NET MVC?

Basically, I log into my website using OpenId, very similar to what I am assuming SO does. When I get the information back, I throw it into a database and create my "Registered User". I set my AuthCookie:

FormsAuthentication.SetAuthCookie(user.Profile.MyProfile.DisplayName, false);

Then I can use this for the User Name. However, I would like to pass in the entire object instead of just the string for display name. So my question is:

How does SO do it?

Do they extend/override the SetAuthCookie(string, bool) method to accept the User object, i.e. SetAuthCookie(User, bool)?

What is the best way to persist a User object so that it is available to my UserControl on every single page of my Web Application?

Thanks in advance!

From stackoverflow
  • You can achieve this behavior by implementing your own custom Membership Provider, or extending an existing one. The provider stores user information based on a key (or just by user name) and provides access to the MembershipUser class, which you can extend however you wish. So when you call FormsAuthentication.SetAuthCookie(...), you basically set the user key, which can then be accessed by the provider.

    When you call Membership.GetUser(), the membership infrastructure will invoke the underlying provider and call its GetUser(...) method providing it with a key of the current user. Thus you will receive the current user object.

  • One way is to inject into your controller a class that is responsible for retrieving information about the currently logged-in user. Here is how I did it. I created a class called WebUserSession, which implements an interface called IUserSession. Then I just use dependency injection to inject it into the controller when the controller instance is created. I implemented a method on my interface called GetCurrentUser, which returns a User object that I can then use in my actions if needed, passing it to the view.

    using System.Security.Principal;
    using System.Web;
    
    public interface IUserSession
    {
        User GetCurrentUser();
    }
    
    public class WebUserSession : IUserSession
    {
        public User GetCurrentUser()
        {
            IIdentity identity = HttpContext.Current.User.Identity;
            if (!identity.IsAuthenticated)
            {
                return null;
            }
    
        User currentUser = null; // TODO: logic to grab the user by identity.Name
            return currentUser;
        }
    }
    
    public class SomeController : Controller
    {
        private readonly IUserSession _userSession;
    
        public SomeController(IUserSession userSession)
        {
            _userSession = userSession;
        }
    
        public ActionResult Index()
        {
            User user = _userSession.GetCurrentUser();
            return View(user);
        }
    }
    

    As you can see, you will now have access to retrieve the user if needed. Of course you can change the GetCurrentUser method to first look into the session or some other means if you want to, so you're not going to the database all the time.

  • Jeff,

    As I said in a comment to your question above, you must use the ClaimedIdentifier for the username -- that is, the first parameter to SetAuthCookie. There is a huge security reason for this. Feel free to start a thread on dotnetopenid@googlegroups.com if you'd like to understand more about the reasons.

    Now regarding your question about an entire user object... if you wanted to send that down as a cookie, you'd have to serialize your user object as a string, then you'd HAVE TO sign it in some way to protect against user tampering. You might also want to encrypt it. Blah blah, it's a lot of work, and you'd end up with a large cookie going back and forth with every web request which you don't want.

    What I do on my apps to solve the problem you state is add a static property to my Global.asax.cs file called CurrentUser. Like this:

    public static User CurrentUser {
        get {
            User user = HttpContext.Current.Items["CurrentUser"] as User;
            if (user == null && HttpContext.Current.User.Identity.IsAuthenticated) {
                user = Database.LookupUserByClaimedIdentifier(HttpContext.Current.User.Identity.Name);
                HttpContext.Current.Items["CurrentUser"] = user;
            }
            return user;
        }
    }
    

    Notice I cache the result in the HttpContext.Current.Items dictionary, which is specific to a single HTTP request, and keeps the user fetch down to a single hit -- and only fetches it the first time if a page actually wants the CurrentUser information.

    So a page can easily get current logged in user information like this:

    User user = Global.CurrentUser;
    if (user != null) { // unnecessary check if this is a page that users must be authenticated to access
        int age = user.Age; // whatever you need here
    }
    
    Jeff Ancel : I am going to look into this. I spent 8 hours yesterday making my own MembershipProvider. In a few hours I will update. This way is much easier than the way I did it yesterday. I also like what I see here.
    Jeff Ancel : I think that this is what I am going to go with. It plugs right in to what I have and only requires the one call like you mention. I very well could do this without the custom provider at all using the method you described. Thanks.
    Andrew Arnott : Glad it will work for you. I find that the asp.net membership provider is a very poor fit for redirect-based, passwordless authentication protocols such as OpenID. Any membership provider that works with OpenID can only be implemented half-way and hokey at best due to the interface that just doesn't fit very well.

Can you run Target Designer for NT 4.0 on an XP box?

I'm trying to run Microsoft Windows NT Embedded 4.0 Target Designer on a Windows XP system, but when I try to build, I get errors. Have you ever done this, and if so, is there a trick to getting it to work?

The best clue I've found so far from Microsoft is that the Development System Hardware Requirements document includes "Windows NT 4.0 Service Pack 4 or later," but I don't know if XP is considered a valid "or later" for NT 4.0.

When I try to build an image by running Target Designer in Windows NT 4.0 or Windows 2000 compatibility mode, I get this error during Building Registry / Adding component parameters:

ERROR: Couldn't get security error = 5

When I run it normally under XP, the error that comes up during Building Registry / Binding Network is:

ERROR: Failed to bind network - Incorrect function
From stackoverflow
  • You have two options (which you may have already tried);

    1. Right-click the shortcut to the Target Designer executable and select Properties, then select the Compatibility tab, tick 'Run this program in compatibility mode for:' and choose 'Windows NT (SP5)' from the dropdown. You should be able to run TD then.
    2. If that does not work, download Virtual PC from Microsoft (free) and create a new virtual machine for Windows NT (you can get the ISO images for NT from MSDN, but you need a valid account; otherwise use your NT CD-ROMs).

    I hope that helps.

    Ryan

ColdFusion Query to java.sql.ResultSet

I've looked in the "undocumentation", and I can see how to create a coldfusion.sql.QueryTable from a ResultSet, but not the other way around. So, how can I extract the java.sql.ResultSet from a ColdFusion ( coldfusion.sql.QueryTable ) query object?

From stackoverflow
  • Turns out all I had to do was pass in my coldfusion.sql.QueryTable object...don't know why it works, unless ColdFusion is doing some sort of magic casting under the hood.

  • coldfusion.sql.QueryTable implements javax.sql.RowSet, which extends java.sql.ResultSet

    Thus, as you discovered, you don't need to do anything. A ColdFusion query is already a Java ResultSet.

Indented hierarchical table structure

I am trying to create a hierarchical display of nested tables, where each sub-level is indented further than its parent. The closest I've come is below. It looks mostly correct in IE (except that the borders on the right are mashed together). In Chrome the sub-item border extends beyond the parent on the right.

I'm open to using divs as well.

<html> 
<head> 
<style type="text/css"> 
.ItemTable
{
    width: 100%;
    margin-left: 20px;
    border: solid 1px #dbdce3;
}
</style> 
</head> 
<body> 
    <table class="ItemTable">
     <tr>
      <td>Item 1</td>
     </tr>
     <tr>
      <td>
       <table class="ItemTable">
        <tr>
         <td>Item  1A</td>
        </tr>
      </td>
     </tr>
    </table>
</body> 
</html>
From stackoverflow
  • changing

    margin-left: 20px;
    

    to

    padding-left: 20px;
    

    works for me on IE7, Firefox, and Chrome (although with Chrome I had to un-maximise the window and then re-maximise it - looks like a rendering bug to me)

  • Looks like a couple of things. Your sub-table was missing its closing tag, and I added padding to the TD to help with the indent:

    <style type="text/css">
        .ItemTable
        {
            width: 100%;
    
            border: solid 1px #dbdce3;
        }
        .ItemTable td
        {
            width: auto;
            padding-left: 20px;
            border: solid 1px #dbdce3;
        }
    </style>
    
    
    <table class="ItemTable">
        <tr>
            <td>
                Item 1
            </td>
        </tr>
        <tr>
            <td>
                <table class="ItemTable">
                    <tr>
                        <td>
                            Item 1A
                        </td>
                    </tr>
                </table>
            </td>
        </tr>
    </table>
    

    Tested it in Chrome, FF, IE6, IE7 and Safari and it looks like it works.

  • Do you plan on displaying tabular data? If not, you would be better off just using divs for this, applying a margin to the child element as shown below:

    <style>
        #container {border:1px solid #999}
        .indent {margin-left:50px; border:1px solid #999;}
        .item {background:#99cc00;}
    </style>
    
    <div id="container">
        <span class="item">This is item 1</span>
    
        <div class="indent">
            <span class="item">This is item 2</span>
    
            <div class="indent">
                <span class="item">This is item 3</span>
            </div>
    
        </div>
    
    </div>
    

    Of course it really depends on what you are trying to display.

Can you Distribute a Ruby on Rails Application without Source?

I'm wondering if it's possible to distribute a RoR app for production use without the source code? I've seen this post on SO, but my situation is a little different. This would be an app administered by people with some clue, so I'm cool with still requiring an Apache/Mongrel/MySQL setup on the customer's end. All I really want is for the source to be protected. Encoding seems to be a popular way of distributing PHP apps (e.g. HelpSpot).

I've found these potential solutions:

  • Zenobfuscate - not all types of Ruby code are supported, however, so that counts it out
  • Ruby Encoder - may be the best option, as their PHP encoder looks alright (I haven't tried it, however), but it's not available yet. I've used ionCube for PHP before and it worked well, but it doesn't seem that ionCube is interested yet.
  • Slingshot - it was mentioned in the other SO post, but it solves a different problem to mine and the source is still visible.
  • RubyScript2Exe - from the doco, it's not production-ready, so that counts it out.

I've heard that potentially using JRuby and distributing bytecode might be a way to achieve this, but I've never used JRuby so I'm not sure what's involved.

Can anyone offer any ideas and/or known examples? Ideally I'd love to have some kind of automated build scenario as well.

From stackoverflow
  • You can, but it wouldn't do anything to prevent somebody from reverse-engineering or modifying it. I remember an article about similar attempts to obfuscate Perl and how they could be effectively bypassed with a debugger and 5 minutes of effort.

    Unkwntech : I don't think obfuscating Perl is needed - Perl is obfuscated enough by itself...
    Dan Harper - Leopard CRM : Whatever solution it is, it's not as secure as not giving anyone the app. I'm just trying to figure out how to stick a hurdle as large as possible in front of anyone trying to get to the source. There is a small element of trust there as the customers have paid to use the software.
  • If you can't wait for the delivery of RubyEncoder, then I think ZenObfuscate is the most promising. Though it may require some modifications to your source code, they do say this on their site:

    ZenObfuscate costs $2500 for a site license or is individually negotiable for other licensing schemes. Yes, that is expensive. That was on purpose. But don't let that thwart you too much. If your product is really cool and we want to see it succeed, we'll make it work. "Really cool" is not freecell.

    Of course, for $2500 (or more), you'd hope to get a few tweaks to the compiler that'd make your codebase fully supported. It might be worth engaging them in the conversation.

    Dan Harper - Leopard CRM : I didn't notice that, thanks Pete, maybe it is worth talking to them about it.
  • I'd spend less time and money on hiding/protecting your code and more time/money on getting customers. Customers beat code all day long.

    George Jempty : -1 for not answering the question
  • Take a look at JumpBox.

    I've had conversations with them on the topic, and they seem to have a solution that will work soon for Rails apps.

  • If you release the source, obfuscated or otherwise, your app will be pirated. See, for example, Mint. It depends on what you're building, but you may find that you're better off releasing the app as a hybrid of sorts: A hosted app with a well-defined API, and a component that runs on the customer's server. As long as the true value of your product lives on the server side, you don't need to obfuscate your code, and you can just release the source code unmodified. Additionally, this may also give you the opportunity to reach clients running, say, PHP rather than Ruby. See, for example, Google Analytics, HopToad, Scout, etc, etc.

    guns : ++ This is the new open/closed source world of the web application. It actually represents a really great scenario for developers - your application's value is in your service and execution, not necessarily in the code itself. You can enjoy both the freedom of open code and the benefits of limited supply.
    Dan Harper - Leopard CRM : -1 for not answering the question. This is not a question of architecture design, business models, or whether the app will be pirated or not.
    Toby Hede : I think that given there is no actual solution for this problem, and it is generally an inherently flawed approach, offering alternatives is quite reasonable.
  • You can also take a look at Mingle from ThoughtWorks studios as an example of using JRuby for this. It's a Ruby on Rails app, they run it using JRuby. They've customized jruby to load encrypted .rb files.

  • Your best option right now is to use JRuby. A little bit of background: my company (BitRock) works with many proprietary and commercial open source vendors. We help them package their server software, which is typically based on PHP, Java or Ruby, together with a web server or application server (Apache, Tomcat), the language runtime and a database (typically Postgres, MySQL) into a self-contained, easy-to-use installer. We have a large number of PHP-based customers (including HelpSpot, which you mention) but also several Rails-based ones. In the case of the RoR customers the norm is to use JRuby together with Tomcat or Glassfish, although in some cases we also bundle a native Ruby interpreter to run specific scripts that rely on libraries not yet ported to JRuby (usually not core to the application). JRuby has matured quickly and in many cases it actually runs their code faster than regular Ruby. Consider also that although porting your code to JRuby is fairly straightforward, you will need to invest some time in it. You may want to check JRuby Stack, which is a free installer of everything you need to get started. Good luck!

  • I'm wondering if you could just "compile" the Ruby code into an executable using something like RubyScript2Exe?

    To be honest I haven't used it but it seems like it could be what you want, even if it just packages up the scripts with the interpreter into a single executable.

Local ASP.Net connection issues

I have been trying to set up my recently reimaged workstation for working with one of our ASP.NET applications, which consists of an ASP.NET website and some C# web services. I can successfully start debug instances of each of the services and the website. However, as soon as the site attempts to connect to and use one of the web services, I get the following exception.

No connection could be made because the target machine actively refused it 127.0.0.1:80

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:80

So it looks like my PC is refusing the connection to itself. My first thought was to check that the Windows firewall is disabled, and it is (the service is not running). I do not think IT has any other firewall software running on this PC, so I am a bit stumped as to what else could be causing the connection to be refused.

From stackoverflow
  • Silly question, but do you actually have IIS installed on your machine? When you run debug instances of your applications, they run outside of IIS. If you're running debug instances of both your site and your service, your service probably isn't running on port 80.

    Double check your IIS installation...and then set up your service running on port 80 properly.

    Danielb : D'oh! I had forgotten I needed to install IIS! In my defense, it's been a good year or so since I last had to set up a PC for web development. Even so, forgetting to install IIS is embarrassing!
  • You don't need IIS, since VS provides its own integrated web server for debugging. IIS IS NOT THE PROBLEM when you see "connection actively refused".

InnoDB hidden auto-increment and Dual Masters

When an InnoDB table is created without a primary key, a "hidden" auto-increment key is created and used for indexes. Does anyone know if this key is multi-master safe?

When setting up auto-increment columns in a database that is to be multi-master, the standard MySQL approach is to set auto_increment_increment and auto_increment_offset to appropriate values for your number of servers.

Does InnoDB's hidden key respect these settings, or does it even need to?

From stackoverflow
  • Since the hidden values are not shared over replication (regardless of STATEMENT or ROW binlog formats), they are safe. It doesn't matter if a slave holds the same internal value as the master. There is no relationship between an auto_incrementing field and InnoDB's internal key.

    The usual caveat regarding replication of a table with no primary key applies.
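    For reference, the dual-master settings mentioned in the question are normally placed in each server's my.cnf (a hypothetical fragment for a two-master setup; with these offsets, master 1 generates odd ids and master 2 even ids, so the two can never collide):

```
# master 1, my.cnf
[mysqld]
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # this master's position (1-based)

# master 2, my.cnf
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2
```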

Basic MVT issue in Django

I have a Django website as follows:

  • site has several views
  • each view has its own template to show its data
  • each template extends a base template
  • base template is the base of the site, has all the JS/CSS and the basic layout

So up until now it's all good. So now we have the master head of the site (which exists in the base template), and it is common to all the views.

But now I want to make it dynamic and add some dynamic data to it. In which view do I do this? All my views are basically render_to_response('viewtemplate.html', someContext). So how do I add a common view to a base template?

Obviously I will not duplicate the common code to each separate view...

I think I'm missing something fundamental in the MVT basis of Django.

From stackoverflow
  • You want to use context_instance and RequestContexts.

    First, add at the top of your views.py:

    from django.template import RequestContext
    

    Then, update all of your views to look like:

    def someview(request, ...):
        ...
        return render_to_response('viewtemplate.html', someContext, context_instance=RequestContext(request))
    

    In your settings.py, add:

    TEMPLATE_CONTEXT_PROCESSORS = (
        'django.core.context_processors.auth',
        ...
        'myproj.app.context_processors.dynamic',
        'myproj.app.context_processors.sidebar',
        'myproj.app.context_processors.etc',
    )
    

    Each of these context processors is a function that takes the request object and returns a context in the form of a dictionary. Just put all the functions in context_processors.py inside the appropriate app. For example, a blog might have a sidebar with a list of recent entries and comments. context_processors.py would just define:

    def sidebar(request):
        recent_entry_list = Entry.objects...
        recent_comment_list = Comment.objects...
        return {'recent_entry_list': recent_entry_list, 'recent_comment_list': recent_comment_list}
    

    You can add as many or as few as you like.

    For more, check out the Django Template Docs.

    Yuval A : wonderful! thanks! fixed a small typo in the import...
    Carl Meyer : Just don't go overboard with context processors - only data that really is used on _every single page load_ belongs in a context processor, otherwise you're wasting DB queries.
  • Or use a generic view, since generic views are automatically passed the request context.

    A simple direct_to_template generic view can be used to avoid having to import/pass in the request context.

  • Context processors and RequestContext (see Tyler's answer) are the way to go for data that is used on every page load. For data that you may need on various views, but not all (especially data that isn't really related to the primary purpose of the view, but appears in something like a navigation sidebar), it often makes most sense to define a custom template tag for retrieving the data.

Do I have to release the nib's top level objects if I don't load the nib manually?

As I understand it, I do have to release the top-level objects (including windows, custom controller objects, ...) from my nib file if I load it programmatically myself.

But do I have to care about them in a simple view-based application that has only one main nib file?

And would I have to care about them in a view-based application that uses a UITabBarController and multiple nib files?

From stackoverflow
  • No. In all cases, the nibs are loaded once, and then unloaded when your app is closed - you don't have to manage the memory. There's no memory leak possible when you don't create more than one of a given object.

Does waveInXXX apply a lowpass filter?

When I use the (Win32) waveInXXX functions to collect samples from a mic at a certain sampling frequency (say 8 kHz, possibly even lower), does the system/sound card apply a lowpass filter to the input samples, or would I get aliasing? Should I sample at a higher frequency and do the filtering myself before lowering the sampling frequency?

From stackoverflow
  • Interesting question. I don't know how you would find out the answer for sure (short of experimenting). I suspect that the actual sample rate of the recording may well be 44.1 kHz (or 48 kHz) and that Windows would then do sample-rate conversion, in which case a low-pass filter almost definitely would be applied. If the sound card itself is sampling at 8 kHz, one would hope that the hardware performs the low-pass filtering beforehand.

    In short, don't bother implementing a low-pass filter unless you experience problems with aliasing artefacts.

    Bandi-T : I second that. Experiment with the OS environments and soundcards similar to what your software will be running on to see how often aliasing occurs. I would bet though that you would be hard pressed to find a computer sound setup in 2010 that results in aliasing.

How should I link a data Class to my GUI code (to display attributes of object, in C++)?

I have a class (in C++), call it Data, that has thousands of instances (objects) when the code is run. I have a widget (in Qt), call it DataWidget that displays attributes of the objects. To rapidly build the widget I simply wrote the object attributes to a file and had the widget parse the file for the attributes - this approach works, but isn't scalable or pretty.

To be clearer, my requirements are:

1. DataWidget should be able to display multiple, different Data objects' attributes at a time
2. DataWidget should be able to display thousands of Data objects per second
3. DataWidget should run alongside the code that generates new Data objects
4. each Data object needs to be permanently saved to a file/database

Currently, the GUI is created and the DataWidget is created then the experiment runs and generates thousands of Data objects (periodically writing some of them to file). After the experiment runs the DataWidget displays the last Data object written to file (they are written to XML files).

With my current file approach I can satisfy (1) by grabbing more than one file after the experiment runs. Since the experiment isn't tied to DataWidget, there is no concurrency, so I can't do (3) until I add a signal that informs the DataWidget that a new file exists.

I haven't moved forward with this approach for two reasons. Firstly, even though the files aren't immediately written to disk, I can't imagine that this method is scalable unless I implement a caching system - but then it seems like I'm reinventing the wheel. Secondly, Data is a wrapper for a graph data structure, and I'm using GraphML (via the Boost Graph Library, i.e. *write_graphml()*) to write the structure to XML files; reading the structure back in with Boost's *read_graphml()* requires me to read the file back into a Data object... which means the experiment portion of the program encodes the object into XML, writes the XML to a file (hopefully in memory and not to disk), then the DataWidget reads the XML from a file and decodes it into an object!

It seems to me like I should be using a database, which would handle all the caching etc. Moreover, it seems like I should be able to skip the file/database step and pass the Data to the DataWidget within the program (perhaps pass it a reference to a list of Data). Yet I also want to save the Data to file, so the file/database step isn't entirely pointless - I'm just using it in the wrong way at the wrong time.

What is the better approach given my requirements?

Are there any general resources and/or guidelines for handling and displaying data like this?

From stackoverflow
  • It seems to me like I should be using a database, which would handle all the caching etc. Moreover, it seems like I should be able to skip the file/database step and pass the Data to the DataWidget within the program (perhaps pass it a reference to a list of Data). Yet I also want to save the Data to file, so the file/database step isn't entirely pointless - I'm just using it in the wrong way at the wrong time.

    If you need to display that much rapidly changing data, having an intermediate file or database will slow it down and likely become the bottleneck. I think the Widget should read the newly generated data directly from memory. This doesn't prevent you from storing the data in a file or database though, it can be done in a separate thread/process.

    bias : How do I couple the classes so that the Widget class can access the Data properties? Do I inherit the Widget from the Data?
  • If all of the data items will fit in memory, I'd say put them in a vector/list, and pass a reference to that to the DataWidget. When it's time to save them, pass a reference to your serializing method. Then your experiment just populates the data structure for the other processes to use.

  • I see you're using Qt. This is good because Qt 4.0 and later includes a powerful model/view framework. And I think this is what you want.

    Model/View

    Basically, have your Data class inherit and implement QAbstractItemModel, or a different Qt Model class, depending on the kind of model you want. Then set your view widget (most likely a QListView) to use Data for its model.

    There are lots of examples at their site and this solution scales nicely with large data sets.

    Added: This model test code from labs.trolltech.com comes in really handy:

    http://labs.trolltech.com/page/Projects/Itemview/Modeltest

    bias : Perfect, this also answers my concern of how do I uncouple the Data and DataWidget classes!
    Mark Beckwith : Great, I added a link to the modeltest code that is really useful.

Redirect web request

I use a third-party application that requests a config file from their Internet site. That file is out of date, but I can create my own file with the updated information.

How can I redirect any requests coming from my computer for a specific URL to a different file? For example, if any application requests 'http://www.theirsite.com/path/to/file.html', cause it instead to receive 'http://www.mysite.com/blah.html' or 'C:\My Documents\blah.html'?

From stackoverflow
  • Best thing would be to add an entry to your hosts file.

    Check this out:

    http://vlaurie.com/computers2/Articles/hosts.htm

  • Why not change your hosts file to point lookups for their website to a web server that you control? Then add your own config file. You may have to redirect other requests to their server, though, if the app uses the website for other resources.
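    For illustration, a hosts-file entry for the example in the question might look like this (the IP address is a placeholder for a server you control):

    ```
    # C:\Windows\System32\drivers\etc\hosts  (or /etc/hosts on Unix)
    # Point the vendor's hostname at a web server you control.
    # 203.0.113.10 is a placeholder for your server's IP address.
    203.0.113.10    www.theirsite.com
    ```

    Your server then needs to serve the updated file at the original path (e.g. /path/to/file.html). Keep in mind this redirects every request for that host, so anything else the application fetches from it must be proxied or stubbed out.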

How to design an email system?

I am working for a company that provides customer support to its clients. I am trying to design a system that would automatically send emails to clients when some event occurs. The system would consist of a backend part and a web interface part. The backend will handle communication with the web interface (which will be for internal use only, to change the email templates), and most importantly it will check some database tables and, based on the results, send emails ... lots of them.

Now, I am wondering how this can be designed to be scalable and provide the necessary performance, as it will probably have to handle a few thousand emails per hour (this should be the peak). I am mostly interested in how this kind of architecture should be designed so that it can easily be scaled in the future if needed.

Python will be used on the backend with Postgres and probably whatever comes first between a Python web framework and GWT on the frontend (which seems the simplest task).

From stackoverflow
  • This sounds to me like you're trying to optimize for batch processing, where the heavy lifting doesn't happen in the web interface but in the backend. This also sounds like a job for a queuing architecture.

    Amazon, for instance, offers queuing systems if you really need massive scale. You can then add multiple machines on your side to deliver the messages as emails, allowing each machine to take perhaps only 100 messages from the queue at a time.

    The pattern for email systems should be asynchronous, so have a look at other asynchronous architectures if you don't like queues.
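    To make the queue idea concrete, here is a minimal Python sketch of the fan-out pattern (all names are illustrative, and the delivery step is stubbed out; a real backend would hand each message to a local MTA via smtplib, or to a hosted queue such as Amazon SQS):

    ```python
    import queue
    import threading

    def send_email(recipient, subject, body):
        # Placeholder for the real delivery step; in production this would
        # hand the message to a local MTA (e.g. smtplib to localhost).
        return f"sent '{subject}' to {recipient}"

    def worker(mail_queue, results):
        # Each worker drains messages from the shared queue until it is empty.
        while True:
            try:
                msg = mail_queue.get_nowait()
            except queue.Empty:
                return
            results.append(send_email(*msg))
            mail_queue.task_done()

    def send_all(messages, num_workers=4):
        """Fan a batch of (recipient, subject, body) tuples out to workers."""
        mail_queue = queue.Queue()
        for m in messages:
            mail_queue.put(m)
        results = []
        threads = [threading.Thread(target=worker, args=(mail_queue, results))
                   for _ in range(num_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results
    ```

    Swapping the stub for a real SMTP hand-off to a local Postfix/Exim instance, or replacing queue.Queue with a distributed queue, scales the same structure up without changing the code that produces the messages.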

  • This is a real good candidate for using some off the shelf software. There are any number of open-source mailing list manager packages around; they already know how to do the mass mailings. It's not completely clear whether these mailings would go to the same set of people each time; if so, get any one of the regular mailing list programs.

    If not, the easy answer is

    $ mail address -s subject < file
    

    once per mail.

    By the way, investigate the policies of whoever is upstream from you on the net. Some ISPs see lots of mails as probable spam, and may surprise you by cutting off or metering your internet access.

    Decio Lira : +1 for the off the shelf hint and ISP surprise!
  • A few thousand emails per hour isn't really that much, as long as your outgoing mail server is willing to accept them in a timely manner.

    I would send them using a local MTA, like Postfix or Exim (which would then send them through your outgoing relay if required). That service is then responsible for the mail queues, retries, bounces, etc. If you're looking for more "mailing list" features, try adding Mailman into the mix. It's written in Python, and you've probably seen it, as it runs tons of internet mailing lists.

    codeape : +1 Using a local MTA is essential in this case.
    MarkR : Yep use a local MTA, monitor the queue lengths and delivery status.
  • You might want to try Twisted Mail for implementing your own backend in pure Python.

  • You might want to check out Lamson, a state machine-based e-mail server written in Python that should be able to do what you have described. It's written by Zed Shaw, and he blogged about it recently here.

Advice on cold calling for jobs

I've been looking for a new job lately, since COBOL really isn't why I got into this industry.

The popular job sites don't seem to be listing anything, and I haven't really found anyone with jobs listed on their site, so I'm considering going another route - cold calling (well, cold emailing).

I was looking at some listings from a few months ago on a job site, trying to get a feel for some of the local companies. I found a place that sounds like it could be decent (a friend of mine apparently used to work there, but that was nine years ago and he was in QA), and I'm trying to figure out how to look into getting a job there.

I probably don't know their tech of choice (VB, C#, and a brief mention of Delphi), but that's sort of the point. I'm basically looking for a good company that can mentor me into a real developer.

Any advice on how to go about contacting these people, or what to say when I do?

From stackoverflow
  • Networking is the best route. Go to some user group meetings for technologies you're interested in, or are popular in your area. Get to know some people, make sure they know you. If possible, get a speaking slot - this will let you show your expertise in the technology, as well as giving you a good motivator for digging into a topic. The best jobs are ones you get referred to, not the ones you go hunting.

    Harper Shelby : Assuming that's not all tongue in cheek...I don't recommend pretending to care about something. It's usually possible to find *some* networking group you actually do care about, even if it's not a passion (e.g. you care about it because it makes you employable). Faking it will generally fail in the long run anyway.
  • The most important thing to do when calling for jobs is to get the name of the person who is going to be making the hiring decisions (a Development Manager ideally). Calling without a name is going to result in not getting past the receptionist, and aiming too high (i.e. executives) is going to get you dismissed.

    These names can be surprisingly hard to find. Most companies will only list executive-level employees on their website. LinkedIn can be a good resource, but if you don't have a fairly strong network yourself, you may be prevented from seeing some key details. Don't worry about finding an e-mail address; if you have a name you can call, and you can also try some common e-mail formats (first initial plus last name, or first.last@company.com).

    Cold-calling alone also may not be your best strategy. I'd recommend sending a resume and cover letter, and following up with a phone call a day later. Your resume might not be read when it appears unsolicited in an e-mail box, but at least the manager might remember the e-mail, and at least he has something to go over when he's having a phone conversation with you.

    Another thing to consider might be sending your resume to a recruiter. In my last round of job searching, I had 3 interviews set up within a week, and 2 offers a week later. If you find a reputable tech recruiter it can be very valuable.

    Gavin Miller : -1 The majority of your answer is called spamming.
    Erik Forbes : Spam is *commercial* unsolicited email - email with the express purpose of selling a product or service. Since he wants to be hired as an employee and not a consultant, this doesn't qualify as spam.
  • There are a couple of different routes to my mind:

    1) Networking - If you can find local user groups that are of interest to you, this is probably a good way to see where a lot of other people are and possibly get some advice from those that have been in the field for a while. Granted, you may get the odd brush-off or a "Why are you asking that?" kind of response, but it is a way to get some ideas.

    2) Recruiting firms - Granted, these firms may mostly be looking for candidates above the junior level, but some may be able to help you find something or make suggestions.

  • You're running a small-scale Permission Marketing campaign on brand you. Unfortunately, you haven't got permission from your cold-call targets, unless they have a hiring need, that is. So do some basic research:

    1) Every time a company wins a contract big enough to hit the press you can be sure that somewhere there's an IT requirement.

    2) Find out what tech the company uses - getting through to a sysadm is normally straightforward, if you ask the right questions you'll find what tech/projects are being run.

    3) Don't apply to companies that are laying people off.

    4) Don't bother with companies in the midst of a downsizing or takeover.

    At least now when you call, you've qualified your leads. Next, find out what their problems are first; don't lead with your skills. Customers buy fixes to problems, not technical ability.

  • Do a little bit of research on your targets' businesses, then stress what skills and past experiences you have that would make you successful as part of their team. This is the sort of information you'll want to include in your cover letter.

    If you don't match the skills they're looking for to a T, they may still be impressed by your eagerness to learn more about the company and to find a role with their team.

  • IMO, you're better off learning the industry trends and focusing on the skills that complement your chosen career path, than to try joining a tech company without having prior knowledge of what technologies it uses.

    Technical companies are not as dependable as companies in most other established industries, so you need to keep yourself flexible.

    If you're still keen on working at the company, I'd suggest networking to figure out what technologies are in use, then ramping up your technical skills so you have a better chance at passing the technical interview.

  • One way that I cold called when I recently moved to a new city was to find/create a list of all the software development shops in town; match them with my skill set and then use their websites to look for job postings. This was incredibly effective and I had a batting average (responses/resumes sent) of ~65-75%.

  • How about getting in touch with your friend who actually worked at this company nine years ago?

    Even if his information is out of date, he may be able to tell you something about the company or the people working there that would increase your chances of getting an interview significantly.

    He may also tell you something that might make you decide not to work at this company.

    Provided he did a good job while working at this company, having your friend introduce you to someone on the inside is probably worth more than any cold calling strategy.

  • In addition to searching on your own, I would create an account on places like Monster and Dice and post your resume. List your skills, education, and experience and describe the type of work you're looking for. Recruiters and companies will then find you. You can hide your contact information so that the only way they can communicate with you is through the job posting service.

    I would also create an account on LinkedIn and put that same information in your profile. LinkedIn is another place that recruiters love to look. Build your LinkedIn network by linking to people you know.

    Love em or hate em, recruiters hold the keys to a lot of the employment kingdoms. Some companies use recruiters exclusively to fill job positions because it's a lot easier for them to do searching that way. Find some recruiting agencies in your area and call them. Recruiters have far larger networks than you could ever hope to create.

    I would avoid cold calling companies. Personally, I find that to be obnoxious. Calling in response to a job posting is fine, but just calling out of the blue can create ill will with whomever you are contacting.

    It sounds like you're looking for a junior position, which means that generally speaking it doesn't really matter which languages/technologies you have experience with, as long as you have some sort of programming experience. Companies that are looking for junior developers expect to have to teach you a lot of things and are typically just looking for an intelligent person with a good attitude.

  • Better stick to COBOL if you're good at it.

    If you want to learn something new, better do that in your spare time. Pet projects are fun anyway:P

  • Read Nick Corcodilos' Ask the Headhunter. He will teach you how to contact companies about jobs, and how to research them first. ATH is a website, excellent email newsletter, book, and blog. Read all of the above, regularly, even after you get a job.

Autocomplete on appended field in JQuery

I am using JQuery to create additional input fields via clicking a link. Currently, I have an autocomplete plugin implemented that works fine for the fields that were created on page load. When the user adds new fields the autocomplete does not work for those specific fields. I was just wondering if someone could help me figure out how to get it to work.

<script type="text/javascript">
    $(document).ready(function() {
        $('#addingredient').click(function() {
            $('<li />').append('<input type="text" class="ingredient" name="ingredient[]" id="ingredient[]" size="60" />')
            .append('<input type="text" class="amount" name="amount[]" id="amount[]" size="5" />')
            .append('<select class="unit" name="unit[]" id="unit[]"><?=$units ?></select>')
            .appendTo('#ingredients')
            .hide()
            .fadeIn('normal');
        });
    });
</script>

<script>
    $(document).ready(function(){
        var data = "http://mywebsite.com/ingredients.php";
        $(".ingredient").autocomplete(data);
    });
</script>


<ul id="ingredients">
    <li><input type="text" class="ingredient" name="ingredient[]" id="ingredient[]" size="60" /><input type="text" class="amount" name="amount[]" id="amount[]" size="5" /><select class="unit" name="unit[]" id="unit[]"><?=$units ?></select></li>
    <li><input type="text" class="ingredient" name="ingredient[]" id="ingredient[]" size="60" /><input type="text" class="amount" name="amount[]" id="amount[]" size="5" /><select class="unit" name="unit[]" id="unit[]"><?=$units ?></select></li>
    <li><input type="text" class="ingredient" name="ingredient[]" id="ingredient[]" size="60" /><input type="text" class="amount" name="amount[]" id="amount[]" size="5" /><select class="unit" name="unit[]" id="unit[]"><?=$units ?></select></li>
</ul>
From stackoverflow
  • The problem is that the autocomplete is only initialized on page load, so it is not applied to dynamically added inputs. You should add the autocomplete to those too by calling it again. So after you have appended the new input, just call the autocomplete function again:

     $(".ingredient").autocomplete(data);
    
    Joe Philllips : I tried that and it didn't work.
    Joe Philllips : Or maybe I did it incorrectly.
    Joe Philllips : It "didn't work" because I wasn't using it correctly. The other code made it explicit as how to use it which is why I accepted it. It was a close call either way.
    Pim Jager : Ah ok, thanks for explaining.
  • That's because you're applying the autocomplete before the new input is created. It doesn't just auto-apply; it only runs when the DOM is ready.

    <script type="text/javascript">
    
    $(document).ready(function() {
        $('#addingredient').click(function() {
            $('<li />').append('<input type="text" class="ingredient" name="ingredient[]" id="ingredient[]" size="60" />')
            .append('<input type="text" class="amount" name="amount[]" id="amount[]" size="5" />')
            .append('<select class="unit" name="unit[]" id="unit[]"><?=$units ?></select>')
            .appendTo('#ingredients')
            .hide()
            .fadeIn('normal');
    
            var data = "http://mywebsite.com/ingredients.php";
            $('.ingredient').autocomplete(data);
        });
    });
    
    </script>
    

    Try that instead.

    tvanfosson : This will rerun autocomplete on all elements, not just the new one. That's probably ok, but overkill.
    Kezzer : Aye, I was thinking of that as it's something I've done. If it's for a small amount it shouldn't be too much pressure on the browser.
    Pim Jager : Just wondering, how exactly does this differ from my answer? (On which d03boy commented that it doesn't work)
    Joe Philllips : For extra credit, how can I NOT request the data from the website every time? I'm not quite sure how to mix javascript and PHP in that aspect.
    Kezzer : @d03boy: tvanfosson's answer is by far the most correct answer in this thread and will also be the most efficient. Use that one.
    Joe Philllips : Yes, I've just tried it out and it works.
  • You need to rerun the autocomplete on the new element, after it has been added to the DOM. The following will wait until the element has been faded in, then sets up the autocomplete on the new element with the correct class.

    <script type="text/javascript">
        var data = "http://mywebsite.com/ingredients.php";
        $(document).ready(function() {
            $('#addingredient').click(function() {
                $('<li />').append('<input type="text" class="ingredient" name="ingredient[]" id="ingredient[]" size="60" />')
                .append('<input type="text" class="amount" name="amount[]" id="amount[]" size="5" />')
                .append('<select class="unit" name="unit[]" id="unit[]"><?=$units ?></select>')
                .appendTo('#ingredients')
                .hide()
                .fadeIn('normal', function() {
                    $(this).find('.ingredient').autocomplete(data);
                });
            });
            $(".ingredient").autocomplete(data);
        });
    </script>
    
    Joe Philllips : I just need to look into caching the URL data now.. hrm.

LINQ to Objects - Is not in?

I have a generic list of custom objects and would like to reduce that list to objects where a specific property value is not in a list of exclusions.

I have tried the following:

Private Sub LoadAddIns()
  ' Get add-in templates
  Dim addIns = GetTemplates(TemplateTypes.AddIn)
  ' Get the current document
  Dim sectionId As String = CStr(Request.QueryString("sectionId"))
  Dim docId As Integer = CInt(Split(sectionId, ":")(0))
  Dim manual = GetTempManual(docId)
  Dim content As XElement = manual.ManualContent
  ' Find which templates have been used to create this document.
  Dim usedTemplates = (From t In content.<header>.<templates>.<template> _
                       Select CInt(t.<id>.Value)).ToList
  ' Exclude add-ins that have already been used.
  If usedTemplates IsNot Nothing Then
    addIns = addIns.Where(Function(a) usedTemplates.Contains(a.TemplateID) = False)
  End If
  ' Bind available add-ins to dropdown
  With ddlAddIns
    .DataSource = addIns
    .DataTextField = "Title"
    .DataValueField = "TemplateID"
    .DataBind()
    .Items.Insert(0, New ListItem("[select an add-in]", 0))
  End With
End Sub

but get the error:

System.InvalidCastException: Unable to cast object of type 'WhereListIterator`1[MyApp.Classes.Data.Entities.Template]' to type 'System.Collections.Generic.List`1[MyApp.Classes.Data.Entities.Template]'.

How can I select only the templates where the template id is not in the list of exclusions?

Thanks, Nick

From stackoverflow
  • Tack a ToList() call onto the end of the Where call to convert the result back to a List of the appropriate type.

    If usedTemplates IsNot Nothing Then
        addIns = addIns.Where(Function(a) usedTemplates.Contains(a.TemplateID) = False) _
                       .ToList()
    End If
    
    Nick : Spot on, thanks! :D
    PSU_Kardi : Thanks a ton! Had the same problem in C#; the ToList() was the key!

How to maintain a list of functions in C++/STL ?

Hi,

Before asking my question directly, I'm going to describe the nature of my problem. I'm coding a 2D simulation using C++/OpenGL with the GLFW library, and I need to manage a lot of threads properly. In GLFW we have to call the function: thread = glfwCreateThread(ThreadFunc, NULL); (the first parameter is the function that the thread will execute, and the second represents that function's parameters). And glfwCreateThread has to be called every time! (i.e. in each cycle.)

This way of working doesn't really help me, because it breaks the way I'm building my code: I need to create threads outside the main loop's scope. So I'm creating a ThreadManager class, which will have the following prototype:

class ThreadManager {

  public:
         ThreadManager();
         void AddThread(void*, void GLFWCALL (*pt2Func)(void*)); 
         void DeleteThread(void GLFWCALL (*pt2Func)(void*));
         void ExecuteAllThreads();

  private:
         vector<void GLFWCALL (*pt2Func)(void*)> list_functions;
         // some attributes


};

So for example, if I want to add a specific thread, I'll just need to call AddThread with the specific parameters and the specific function. The goal is to be able to call just ExecuteAllThreads(); inside the main loop's scope. But for this I need to have something like:

void ExecuteAllThreads() {

      vector<void GLFWCALL (*pt2Func)(void*)>::const_iterator iter_end = list_functions.end();
      for(vector<void GLFWCALL (*pt2Func)(void*)>::const_iterator iter = list_functions.begin();
      iter != iter_end; ++iter) {

           thread = glfwCreateThread(*iter, param);
      }
}

And inside AddThread, I'll just have to add the function referenced by the pt2Func to the vector : list_functions.

Alright, this is the general idea of what I want to do. Is it the right way to go? Do you have a better idea? And how do I do this, really? (I mean, the problem is the syntax; I'm not sure how to write it.)

Thank you !

From stackoverflow
  • What about trying to store them using boost::function ?

    They can stand in for your specific functions, since they behave like real objects but are in fact simple functors.

    iain : +1 For boost::function. This is the ideal way to store function pointers and/or function objects with the same signature. This will also mean that additional arguments can be passed with boost::bind with the function object is created.
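    A rough sketch of how the boost::function idea could look, using std::function (its C++11 descendant; boost::function works the same way). The class and member names mirror the question, the GLFWCALL convention is omitted, and the callables are invoked directly here rather than handed to glfwCreateThread:

    ```cpp
    #include <cassert>
    #include <functional>
    #include <utility>
    #include <vector>

    // Store ready-to-run callables instead of raw function pointers.
    class ThreadManager {
    public:
        void AddThread(void* arg, std::function<void(void*)> fn) {
            list_functions.emplace_back(std::move(fn), arg);
        }

        // In the real code each entry would be passed to glfwCreateThread;
        // here we just invoke them directly to show the bookkeeping.
        void ExecuteAll() {
            for (auto& entry : list_functions)
                entry.first(entry.second);
        }

    private:
        std::vector<std::pair<std::function<void(void*)>, void*>> list_functions;
    };

    int main() {
        int counter = 0;
        ThreadManager mgr;
        mgr.AddThread(&counter, [](void* p) { ++*static_cast<int*>(p); });
        mgr.AddThread(&counter, [](void* p) { *static_cast<int*>(p) += 10; });
        mgr.ExecuteAll();
        assert(counter == 11);  // both callables ran against the same argument
        return 0;
    }
    ```

    Because std::function (like boost::function) can also wrap lambdas, functors, and bound member functions, the manager is no longer limited to plain function pointers with one fixed signature.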
  • You need to create threads in each simulation cycle? That sounds suspicious. Create your threads once, and reuse them.

    Thread creation isn't a cheap operation. You definitely don't want to do that in every iteration step.

    If possible, I'd recommend you use Boost.Thread for threads instead, to give you type safety and other handy features. Threading is complicated enough without throwing away type safety and working against a primitive C API.

    That said, what you're asking is possible, although it gets messy. First, you need to store the arguments for the functions as well, so your class looks something like this:

    class ThreadManager {
    
      public:
             typedef void GLFWCALL (*pt2Func)(void*); // Just a convenience typedef
             typedef std::vector<std::pair<pt2Func, void*> > func_vector;
             ThreadManager();
             void AddThread(void*, pt2Func); 
             void DeleteThread(pt2Func);
             void ExecuteAllThreads();
    
      private:
             func_vector list_functions;
    };
    

    And then ExecuteAllThreads:

    void ExecuteAllThreads() {
    
          func_vector::const_iterator iter_end = list_functions.end();
    
          for(func_vector::const_iterator iter = list_functions.begin();
          iter != iter_end; ++iter) {
    
               thread = glfwCreateThread(iter->first, iter->second);
          }
    }
    

    And of course inside AddThread you'd have to add a pair of function pointer and argument to the vector.

    Note that Boost.Thread would solve most of this a lot cleaner, since it expects a thread to be a functor (which can hold state, and therefore doesn't need explicit arguments).

    Your thread function could be defined something like this:

    class MyThread {
      MyThread(/* Pass whatever arguments you want in the constructor, and store them in the object as members */);
    
      void operator()() {
        // The actual thread function
      }
    
    };
    

    And since the operator() doesn't take any parameters, it becomes a lot simpler to start the thread.

    Amokrane : Thank you for this snippet ! This is exactly what I wanted to do.. but you are right: 1/I find it suspicious too, but it seems to work like this :/ 2/ I'm going to take a look at the boost library, i'm not familiar with it but as the threads represent a crucial point of my simulation, I think it's worth learning it!
  • I am not familiar with the threading system you use. So bear with me.

    Shouldn't you maintain a list of thread identifiers?

     class ThreadManager {
         private:
           vector<thread_id_t> mThreads;
         // ...
     };
    

    and then in ExecuteAllThreads you'd do:

     for_each(mThreads.begin(), mThreads.end(), bind(some_fun, _1));
    

    (using Boost Lambda bind and placeholder arguments) where some_fun is the function you call for all threads.

    Or is it that you want to call a set of functions for a given thread?

    Amokrane : I don't know exactly what the bind function of the Boost library does.. the idea of maintaining a list of thread identifiers is good but I don't think it can be applied to the glfw threading system anyway.. will think about that.. thanks!
  • Consider Boost Thread and Thread Group

    Amokrane : thanks.. will do!