Thursday, January 20, 2011

How to set chroot for SFTP under Ubuntu 8.04?

Ubuntu 8.04 LTS comes with OpenSSH 4.7, before the ChrootDirectory parameter was introduced. How can I upgrade OpenSSH to 4.9+? Alternatively, without upgrading OpenSSH, how can I set chroot?
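
For reference, once an OpenSSH release with chroot support (4.9 or later) is available, the built-in SFTP chroot is configured in sshd_config roughly as below. This is a minimal sketch: the group name and path are placeholders, and the chroot target must be owned by root and not group- or world-writable.

    # /etc/ssh/sshd_config (OpenSSH 4.9+ only; the 4.7 shipped with 8.04 lacks this)
    Subsystem sftp internal-sftp

    Match Group sftponly
        ChrootDirectory /home/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no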

  • rssh (restricted shell) exists on 8.04 and can be used to set up an account that is restricted to sftp and/or scp within a specific folder.

    From Zoredache
  • Have you looked for the later release in the backports repository? As a key component of Ubuntu it could well be in there at a later version.

    netvope : I've uncommented the backports repositories in "/etc/apt/sources.list" and ran "apt-get update", but it still can't find newer versions. How can I install backports packages?

HOSTS / LMHOSTS files on XP System in a AD Domain

I have found that within my Active Directory (Windows 2003 Interim), there are 4 DCs and each is a global catalog server. So in theory any of them should be able to authenticate users.

I have also found that our XP clients have lengthy HOSTS and LMHOSTS files (both containing the same entries).

My concern is that I had an issue with one of my AD servers (the one that holds the PDC role) and it was down for a few hours; I think the entries in the HOSTS/LMHOSTS files did not help my issue. I was able to move the roles from this server to one of the alternative ones, though some XP systems still did not want to play nice.

192.168.1.2 "BDC_NT \0x1b" #PRE
192.168.1.2 AD-PDC #PRE #DOM:BDC_NT
192.168.1.3 AD-BDC1 #PRE #DOM:BDC_NT
192.168.1.4 AD-BDC2 #PRE #DOM:BDC_NT
192.168.1.5 AD-BDC3 #PRE #DOM:BDC_NT

Would these entries hinder the users' ability to connect to servers and authenticate with the global catalogs, given that the entry on the first line references the server that went offline? It looks like that would override some if not all of the other domain controllers on the network and cause issues with people trying to log into the systems.

Am I close or way off base on this one? I have always been the type to keep really clean HOSTS and LMHOSTS files and let DNS and WINS take care of name resolution, so that systems can change in such a case.

  • Why are you using the hosts/lmhosts files in the first place? That's just begging for problems. If your AD domain is native you should just lose those files and let DNS take care of things.

    Even if it's not native, if your PCs are joined to the domain then there are very few reasons to have big, long hosts/lmhosts files, common to all of them, with entries related to the domain they're members of.

    Adam M. : It is something that I got handed over. It was a surprise to me, and with the issue of the server being down it was a nasty thing to find out. I have not used HOSTS or LMHOSTS since the days of DOS...
    squillman : @Adam: haha, yeah. Been there :) Toss those bad boys, you'll be happier.
    From squillman
  • The last time I had to use LMHOSTS was to enable NETBIOS logons across subnets for NT systems when we had serious problems with WINS servers that weren't reliable. I can't see any reason at all to have any entries in LMHOSTS on a W2K3 domain with XP clients. The #DOM #PRE entries really are going to mess with the XP clients when you have to carry out any maintenance on your DC's (as you have found out).

    If you have DNS servers then there is no reason whatsoever to have hosts files. There may be some argument for individual use of hosts files, but from a SysAdmin perspective you really do not want to be worried about the hassle of managing them, especially on client PC's. Hosts files only handle simple name resolution, so they are of no use in a domain context: if the actual DNS fails, there's no way for the SRV queries that are needed to support domain logons to be handled.

    In short if you have an operational WINS infrastructure and a Windows 2000 (or newer) domain and all your clients are Windows 2000 (or newer) then you should not use hosts or lmhosts files.

    From Helvick

Fix a Truecrypt Volume

I have a TrueCrypt backup volume (a USB disk, encrypted as an entire device, e.g. /dev/sdb) that was accidentally attached to a Windows system, and Windows then initialized the disk.

No other action has been taken on that disk.

The disk no longer mounts.

Assuming no other external sources (backups of the headers etc) is it possible to fix that volume?

IIS7: How to import public key and private key as two separate files?

We have a client who is directing their traffic to our web servers and needs us to use their wildcard SSL certificate. They gave it to me in two pieces though, one is the public key (.cer) and another file containing the private key (.key). I can't figure out how to get these two to come together in IIS so I can bind it to a site. Assistance is greatly appreciated. Thanks!

  • You need to load the certificate into the certificate store on your computer. The certificate store is an MMC snap-in.

    From JD
  • You may need to use OpenSSL to convert the file formats to PFX and then use the Certificates MMC snap-in to import them into the local computer's personal store.

    The OpenSSL command is something like this -

    openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt
    
    From Doug Luxem
  • You could ask your client if the certificate could be exported as a .pfx file. You can easily import the pfx file using IIS Manager.

    Here is a link to a tutorial: PFX Export/Import Explained - How to Import and Export your SSL Certificate in IIS 7

    From splattne
  • You'll just need to convert the separate certificate files to a .pfx to import it into IIS: https://www.sslshopper.com/ssl-converter.html

    From Robert
  • openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt

    This one is the perfect answer. Now I can use a common UCC ssl certificate installed on both Linux and Windows servers.

    Cool fix !! Linux Rocks :-)

    From Liju
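
Once the .pfx exists (from the openssl command above), one way to load it into the local machine's personal store from the command line is certutil; a rough sketch, with file name and password as placeholders (if your certutil doesn't support -importPFX, use the Certificates MMC import wizard instead):

    rem run from an elevated command prompt on the IIS server
    certutil -f -p "PfxPasswordHere" -importPFX certificate.pfx

After that the certificate should show up under Server Certificates in IIS Manager, ready to be bound to the site.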

Executing command after receiving an email

How would I set Postfix to execute a command when it receives an email to a given address/username, or perhaps an email containing some text?

  • There isn't any way to match on text, but you can forward all messages sent to an address to a program.

    You need to add an alias to your system aliases file, usually /etc/postfix/aliases, or to the user's .forward file. The first option is more flexible, because you can have an alias for an address which doesn't actually map to an account (see the sketch after these answers).

    The alias should be something like

    |/usr/local/bin/command

    You should give a full path, because you don't know the context that it will be executed in.

    If your program exits with 67, the message will be bounced as unknown user; exiting 0 will accept and drop the message. Anything else will be retried until the message times out and bounces.

    Be careful of security - you're basically allowing anyone on the Internet to run a program on your system, so don't trust user input, and sanitize it before you use it.

    From gorilla
  • It's been a while since I played with Postfix a lot, but IIRC it usually comes bundled with a fairly basic MDA, which can still understand .forward files in the user's home dir. You'd need to read the docs and your Postfix config to find out what MDA is configured.

    The daddy of all MDAs (IMHO) is procmail. You can substitute procmail for the current MDA in your main.cf - see http://www.postfix.org/faq.html#procmail

    Procmail reads a file in the user's home directory to determine how to process messages. This goes way beyond just being a config file; it's more like a programming language. It's certainly capable of what you ask.

    C.

    Patrick R : +1 - nice link - that part about "Sending mail to a FAX machine" might be a perfect example for what starsky is looking for.
    From symcbean
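
To make the alias approach above concrete, here is a minimal sketch; the alias name, script path, and aliases file location are assumptions, so check alias_maps in main.cf for the file your installation actually uses:

    # /etc/aliases -- pipe mail for trigger@yourdomain to a script (message arrives on stdin)
    trigger: "|/usr/local/bin/handle-mail.sh"

Then rebuild the alias database with newaliases so Postfix picks up the new entry.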

Windows: Running an AutoIt script to launch a GUI app - on a server, when no one is logged in

I want to run an AutoIt script every day at 1:00 AM on a Windows 2003 Server Standard Edition. Since this is a server, obviously there is rarely someone sitting there logged in at the console, so the procedure needs to account for this.

The AutoIt script in question launches and sends keypresses to a GUI app, so the process needs to include creating some sort of session for the user running the scheduled task.

Is there a way to do this?

  • I can't just use Scheduled Tasks to run the AutoIt script when no one is logged in - if I do, it fails to launch at all.
  • I thought that I might be able to create an RDP session and run the scheduled task as that user, inside that session, but I haven't found a way to create an RDP session without launching mstsc.exe -- which is itself a GUI app, and I have the same problem again.
  • How to use Schtasks.exe to Schedule Tasks in Windows Server 2003

    And for AutoIt: the Task Scheduler UDF (User Defined Function) has an AutoIt function for the purpose.

    You will find more such useful functions at the AutoItScript Wiki UDF page.

    Listing of libraries of user defined functions
    These libraries have been written to allow easy integration into your own script and are therefore very valuable resources for any programmer.

    From nik
  • I am not sure if srvany from Microsoft allows running GUI apps as a service, but AlwaysUp does. You could then use Windows Scheduled Tasks or anything else to make sure your script runs at the desired time.

    From SvenW
  • You can launch Remote Desktop from the command line. Use AutoIt on another machine to Remote Desktop into your target server (see the sketch below).
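
A rough sketch of the command-line pieces mentioned above; the server name, script path, and account are placeholders. Note that XP/2003 schtasks expects the start time as HH:MM:SS while later versions use HH:MM, and the 2003-era RDP client uses /console where newer clients use /admin.

    rem create the 1:00 AM task on the server, running as a real user account
    schtasks /create /tn "AutoItNightly" /tr "C:\Scripts\myscript.exe" /sc daily /st 01:00:00 /ru MYDOMAIN\taskuser /rp *

    rem open an RDP session to the server's console from another machine
    mstsc /v:myserver /console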

Group Policy: Trusted Location in User's My Documents?

I am trying to use group policy to add a subdirectory of the user's home directory as a trusted location for Microsoft Access 2007 (User Configuration/Administrative Templates/Microsoft Office Access 2007/Application Settings/Security/Trust Center/Trusted Locations). However, where I'm having difficulty is that it doesn't seem like the group policy works with a relative path (%userprofile%\My Documents\Subdirectory). Is that true? If so, would a feasible workaround be a login script that adds the appropriate registry key?

  • For the logon script approach, you should add the path to:

    HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Access\Security\Trusted Locations\LocationX
    

    Where X is the location number

    As for %userprofile%, I haven't tried that yet (a reg add sketch follows after these answers).

    Tony Toews : Just an FYI. You don't actually need to use LocationX after \Trusted Locations\. You can put anything in there that you want, such as the name of your application. I would suggest not using Location as some folks may already have some locations defined, and conceivably you could wipe theirs out.
  • An alternative would be to use the Auto FE Updater. This utility has an option to set the trusted location of the FE automatically. It will also copy down new Access FEs and associated files when updates are made available on the server, create shortcuts and more.

    I would also suggest using a subfolder of the %appdata% aka Application Data folder to store the Access FE and associated files as this is somewhat hidden and thus less likely for the users to muck with the files.

    David W. Fenton : When you recommend AppData are you taking account of the new layout in Vista/Win7 with the roaming folder, which is where this data is supposed to go? How does one determine what folder %appdata% is? Using the environment variable or Windows API?
    Tony Toews : David, I don't know enough about roaming folders in Vista/Win 7. I haven't had any questions on that line from the users. In the AutoFEUpdater the %appdata% is the special folder API call using the constant CSIDL_APPDATA and not the environment variable.
    From Tony Toews
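
A sketch of the reg add approach mentioned above; the key name MyApp and the AllowSubfolders value are illustrative, and %userprofile% expands on the client when the login script runs:

    reg add "HKCU\Software\Microsoft\Office\12.0\Access\Security\Trusted Locations\MyApp" /v Path /t REG_SZ /d "%userprofile%\My Documents\Subdirectory" /f
    reg add "HKCU\Software\Microsoft\Office\12.0\Access\Security\Trusted Locations\MyApp" /v AllowSubfolders /t REG_DWORD /d 1 /f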

Does (and when will) OpenSolaris support ZFS deduplication and L2ARC?

Deduplication and L2ARC in ZFS would be nice to have. Does OpenSolaris support them? I can't quite figure out which version of Solaris they are in, and how that maps to OpenSolaris. Are they there? If not, do you know when they are scheduled?

  • It's coming either this month or next but only in dev builds for now, it may be in 128 but more likely a little after that. Sit tight, it'll be worth the wait :)

    emgee : It's in snv_128a. The current dev build is snv_129.
    Chopper3 : Good news, thanks.
    From Chopper3
  • L2ARC has been in OpenSolaris for a while now. Dedup is already in the development builds of OpenSolaris. If you can't wait until the next release, here are some directions on how to upgrade to the development branch.

    I run the development branch on my file server at home, which is currently build 129. You have to be careful and wait a few days to make sure there aren't any gotchas before doing an image-update, but I haven't run into any problems doing this. And if you do, you just reboot into the previous BE... gotta love ZFS clones. I'm not using L2ARC, but I did play around with dedup a little the other day on a ZFS volume.

  • OpenSolaris ZFS deduplication was released in November, after being in development for about a year.

  • OpenSolaris' latest development releases support both dedup and L2ARC. Solaris 10 update 8 (which was released in October) has L2ARC, but not dedup.

    The L2ARC code seems well tested and stable, and if your workload involves a lot of random reads, it is very likely to help.

    The dedup code is not thoroughly baked at this time and I would recommend that you only use it on test machines without any important data.

    To track the status: http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup and note in particular:

    • Pool might hang if large files or file systems are removed when dedup is enabled. See CR 6905936
    • System panic during operation after large amounts of data have been removed. See CR 6909931.

    I am sure the bugs are being worked on, and dedup might be in a usable state sometime soon, but please don't jump into using a brand new feature in a development release until you are sure its level of stability matches your risk tolerance.

  • The easiest way to get dedup at present is to go to the GenUNIX site, download the latest OpenSolaris preview release on ISO, and install that. Choose one of the AI ISOs because the installer is better.
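
Once you are on a dedup-capable build (snv_128 or later), both features are just pool/dataset settings; a quick sketch with a hypothetical pool named tank and placeholder device names:

    # make sure the pool version is new enough, upgrading it if needed
    zpool upgrade tank

    # enable deduplication on a dataset
    zfs set dedup=on tank/data

    # add an SSD as an L2ARC cache device
    zpool add tank cache c0t2d0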

Restrict Computer or Users from Internet but allow access to intranet and Windows Update / ePO?

So this may be impossible, but I've been asked to try and find a way to do it. So far nothing I have found has been workable.

I need to restrict specific machines or user accounts from regular Internet access but let them have access to the intranet portion of our network. I do not have Active Directory control, nor does anyone at my local workplace (corporate control in a different state). I have tried going through IPsec and doing this per local machine, but that system seems to have been removed from the images that are installed on these machines so that is out.

So far the only other option I can think of is assigning the machines a specific ip address and removing their gateway access. This would probably work but the machines need to be able to receive updates that are being pushed to them through ePO and LanDesk.

I would really like to do this at the user level, because then if I need to do tech work on the machine and need internet access I can get to it, but a "special" user could log in and not be able to get to anything.

  • I found out how I'm going to do it. I created a special noaccess.rat file for Content Advisor in Internet Explorer, added the addresses that they need access to and nothing else. Problem solved.

    Skaughty : Curses.. I was trying to type that before you answered. You could also look at rules on the external router/firewall.
    MoSiAc : Well we're thinking that would be the most secure but everything here is outsourced so we would have to call the guys to come set that up. We are looking into that though. Kinda wondering if content advisor allows wildcards. We've also tried setting PROXY to NO in internet explorer, which usually keeps users out for about a day.
    From MoSiAc
  • External firewall / router is probably the most secure. You could set up a walled garden / captive portal (much like the ones that you get when you log into a wifi hotspot) which permits access to your update services but nothing else unless a superuser password is entered.

    MoSiAc : This would be the way we would like to go, but again we don't have that kind of control here sadly. Just asked to complete a task local machine side.
  • This is definitely something that is better done on the network.

    You could use a cheap router hacked to use something like http://www.dd-wrt.com/

    You could connect this between your company network and the computers you want to isolate.

    You should be able to use the router's admin page to allow access to your LAN and certain whitelisted networks (for your updates) but restrict all others.

    The benefit of doing it at the network level is that you block entire networks, not just DNS or port 80 / 443 (web/ssl) as some solutions to this problem do. Both of those can be easily circumvented by knowledgeable users, but it is much harder for users to bypass a captive portal. Not impossible, but then nothing is!

    The DD-WRT forums should be able to help you do this.

    There may be commercial solutions that achieve the same. Any Layer 7 style firewall technically has the ability to do this, as they can inspect tcp/ip packets and modify / block them in real time according to specified rules. Whether this functionality is exposed at the user level in a particular product is something to discuss with the manufacturer.

    However if you are not allowed to do this to your network due to company policy then you could:

    (i) look into software designed to prevent children from accessing the internet without parental supervision; or

    (ii) look at whether a software firewall would enable you to whitelist / blacklist certain networks. You could whitelist your internal network and the specific networks you wish to connect to, and blacklist everything else.

Security VPN vs RDP

I was wondering which is more secure, RDP or VPN. I realize RDP over VPN is the most secure. I was just wondering what security issues there are with just RDPing from home to a workstation at work, and whether I should always use our VPN to do so. Thank you for your time.

  • You're really not comparing apples to apples here as they don't provide the same service. RDP provides you with a terminal session on a remote host, while VPN is an encrypted tunnel between point A and point B which encapsulates higher layer information.

    The biggest security issue with direct RDP to your server is that it exposes your server to the entire world. Anyone with an RDP client can fire it up and, assuming they know your hostname / IP, connect to your server and start trying to log in. At the very least they can cause you some problems by locking out accounts. If you have not taken the measures to harden your server (which is often the case when RDP is directly exposed) then most likely you're just a sitting duck.

    You can help that by configuring RDP to use SSL. And, obviously, using RDP over a solid VPN connection.

    From squillman

How do I integrate an OpenSolaris NAS with AD?

I basically want an OpenSolaris NAS (ZFS goodies), but I'd like to integrate it with AD, so that when I create a new user in AD, his roaming profile is created on the NAS. That means all his ACLs have to work (I know they're compatible), etc.

The tutorials I found don't actually work, so any help would be much appreciated.

  • I'm not sure whether the CIFS sharing feature in ZFS will do this, but you don't have to use that feature. Instead, if you use Samba to share the ZFS filesystems, you will have the full AD integration that Samba offers. That is the way I would solve this problem, and part of the reason is that people use Samba on Linux, FreeBSD and many other systems, so its codebase will have fewer bugs in this area.

    However, if you are following any SAMBA guides, do remember that ZFS works differently from common filesystems. Create one ZFS filesystem (or more) per user, i.e. don't use home directories on a single filesystem.
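
If you go the Samba route, a minimal sketch of the AD-member configuration; the realm, workgroup, paths, and idmap ranges are placeholders, the smb.conf location depends on how Samba was packaged, and you also need working Kerberos (krb5.conf) and DNS pointing at the AD domain controllers:

    [global]
        security  = ads
        realm     = EXAMPLE.COM
        workgroup = EXAMPLE
        idmap uid = 10000-20000
        idmap gid = 10000-20000

    [profiles]
        path     = /tank/profiles
        writable = yes

Then join the domain with "net ads join -U Administrator" and restart the Samba daemons.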

Is there a way to allow all external ip connections in FreeRadius?

Hello everyone,

I have a problem setting up a FreeRADIUS server to allow connections from all external IP addresses. My hotspot system is based on CoovaAP and a custom-made captive portal which communicates with CoovaChilli (deployed on the router, not on the server). The router is connected to the modem via Ethernet. The captive portal communicates with the RADIUS MySQL database to verify hotspot authorization. Everything works until the modem's IP address changes.

Here is a sample from /etc/raddb/clients.conf:

client x.x.x.x {
  secret = 12345
  shortname = name
}

So, the x.x.x.x IP address somehow needs to be dynamic, and I don't know how to sync the modem's external IP address to the RADIUS database to make it work.

The question is: how do I make FreeRADIUS accept connections from all IP addresses, or sync the modem's external IP address into the RADIUS database?

Thanks

  • Sure, you can do:

    client 0.0.0.0/0 {
      secret = 12345
      shortname = name
    }
    
    Dan Sosedoff : Thank you man. It solved my problem.
    From mat

Forcing 32bit compilation on Mac OS X 10.6

I'm running Mac OS X 10.6.

How do I force configure to compile in 32 bit mode?

  • You could simply set the CFLAGS and CPPFLAGS:

    export CFLAGS=-m32
    
    export CPPFLAGS=-m32
    

    These flags will simply force gcc to use the -m32 option, which compiles in 32-bit mode.

    So if you only have one file to compile, you can use:

    gcc -m32 myfile.c
    
    From Studer
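
For a configure-based build, you can also pass the flags directly on the configure command line; a sketch, and LDFLAGS is usually needed as well so the link step targets 32-bit too:

    ./configure CFLAGS="-m32" CXXFLAGS="-m32" LDFLAGS="-m32"
    make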

How to combine RRD files to an overview graph? (rrdtool, collectd)

I have collectd running, which puts performance data like CPU usage or network bandwidth into RRD files.

My problem is that I have a single file for each node in the cluster.

How can I get an overview graph for my cluster?
(For example, I have 5 nodes which each send 10 Mbit, so the graph should show 50 Mbit.)

  • You just specify each file in its own DEF: statement for rrdtool. See below for a hacky one-off I did as an example. Notice that one of them is foo_kbrandt_foo1 and the other is foo_kbrandt_foo2, so the graph is pulling from two different rrd files.

    rrdtool graph MessagesDeliveredPerMinInfomationStores.png \
    --imgformat=PNG \
    --title="Messages Delivered Per Minute" \
    --base=1000 \
    --height=600 \
    --width=1000 \
    --start='February 13 2009' \
    --slope-mode \
    --lower-limit=0 \
    --vertical-label="Messages Delivered Per Minute" \
    --step 10000 \
    'DEF:a=/usr/local/nagios/var/rra/foo/foo_kbrandt_foo1_delivered.rrd:msg_per_min:AVERAGE' \
    'LINE2:a#FF0000:arf Messages Per Minute\l'  \
    'GPRINT:a:AVERAGE:arf Delivered Average\: %7.2lf %s\j'  \
    'GPRINT:a:MAX:arf Delivered MAX\: %7.2lf %s\j'  \
    'DEF:b=/usr/local/nagios/var/rra/foo/foo_kbrandt_foo2_delivered.rrd:msg_per_min:AVERAGE' \
    'LINE2:b#33FF33:blip Messages Per Minute\l'  \
    'GPRINT:b:AVERAGE:blip Delivered Average\: %7.2lf %s\j'  \
    'GPRINT:b:MAX:blip Delivered MAX\: %7.2lf %s\j'  \
    
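If you want a single line showing the cluster total (5 nodes at 10 Mbit graphed as 50 Mbit) rather than one line per node, you can sum the DEFs with a CDEF. A sketch reusing the same two hypothetical files as above:

    rrdtool graph cluster_total.png \
    --title="Cluster total" \
    --vertical-label="Messages Delivered Per Minute" \
    'DEF:a=/usr/local/nagios/var/rra/foo/foo_kbrandt_foo1_delivered.rrd:msg_per_min:AVERAGE' \
    'DEF:b=/usr/local/nagios/var/rra/foo/foo_kbrandt_foo2_delivered.rrd:msg_per_min:AVERAGE' \
    'CDEF:total=a,b,+' \
    'LINE2:total#0000FF:Cluster total\l'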

Request bursting from web application Load Tests

I'm migrating our web and database hosting to a new environment on all new machines. I've recently performed a Load Test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests.

Here is the gist of our setup:

  • Firewall server running MS Forefront TMG 2010 on Win 2k8 server
  • Request routing done by IIS Application Request Routing on firewall machine
  • Web server is a Hyper-V VM on the Database server (which is the host OS)
  • These machines are hefty with dual-CPU's with six cores (12 total procs)
  • Web server running IIS 7.5
  • Web applications built in ASP.NET 2.0, with 1 ISAPI filter (Url Rewrite) in front

What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 requests at a time.

Performance Monitor shows nearly all of the counters moving through this pattern: a burst of requests comes in, req/sec jumps to 70, queued requests jump to 500, current requests jump up, the CPU jumps up, everything. Then once it has handled that group of requests, there is a lull of nearly 10 seconds where almost nothing is happening: 0-5 req/sec, 0 queued requests, minimal CPU usage. Then after 10 seconds of inactivity, another burst comes through, spiking all of the counters once again.

What I can't figure out is why the requests are coming through in bursts when I know that the load being generated is not sent that way, especially considering the various load-generating clients are sending traffic at different intervals with random think times between each request. Is there something in the Hyper-V layers, or perhaps in the hardware, which might cause this coalescing of requests?

Here is what I'm looking at; the highlighted metric is Requests/sec, but the other critical counters go with it, such as Requests Queued (which I'd obviously like to keep as close to 0 as possible). [Performance Monitor screenshot]

Any ideas on this?

  • After a lot more testing and research, I have resolved this issue as being a result of the WAPT load testing tool. There were some settings that, when tweaked, changed this pattern.

    I confirmed that this was a product of the WAPT testing tool once I set up a WAPT instance and used Performance Monitor on both the web server and the machine generating the load. It is easy to correlate the packets sent on the network interface spiking at the same intervals and times as the Requests/sec on the web server.

    From MaseBase

Debian and Multipath IO problem

Basically the situation is this: I have a box running Debian. The box internally has an Intel SCSI RAID controller controlling 2 hard drives in RAID1 mode, which is where the OS is installed.

Further, I have a QLogic fiber channel adapter that connects the unit to a Fiber Channel SAN.

My process of installation is I'll install Debian to the local drives, and leave the QLogic firmware out of it for the time being.

Then once I get the unit online, I'll install the firmware drivers.

This flops my internal drives from /dev/sda to /dev/sdc, which is a bit annoying, but recoverable. Probably should address these by UUID anyways.

Once I get back online, I have to install multipath-tools (the framework is a multipath framework).

However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root.

Any help with what may be the problem here? Or at least, how can I disable multipath until after the unit boots, and have it ignore the internal drives?

  • How are the multi-path targets presented? What are the device names?

    tearman : They're presented as two devices (two LUNs). They're named /dev/sda and /dev/sdb
  • This appears to be a conflict between multipath-tools-boot and the SCSI controller. The workaround is to use software RAID for the time being.

    From tearman
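
On the "ignore the internal drives" part of the question: device-mapper-multipath can be told to skip devices with a blacklist stanza in /etc/multipath.conf. A sketch; the WWID is a placeholder, and blacklisting by WWID (as reported by "multipath -ll") is more robust than by device node name since the sdX names move around. On Debian, rebuild the initramfs afterwards (update-initramfs -u) so the boot-time multipath run sees the change.

    blacklist {
        # internal RAID1 volume -- replace with the WWID reported by "multipath -ll"
        wwid 3600508b400105e210000900000490000
    }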

Initialize a MySQL slave server located on another network than the master

Hello, it's my first question here and English is not my native language but I'll try to explain.

I have a master MySQL server with a public IP address running in my provider's infrastructure, and I want to run a local MySQL slave server in my office that will replicate all the data for testing purposes.

Setting up the replication works perfectly: I created an SSH tunnel to have my slave read the binlog from the master, and here everything is fine.

My problem is to set up the data from the master. Usually when I want to load the data from the master to any slave on the same network, I run the following command on the master :

mysqldump 'master' --master-data=1 | mysql 'slave' 

but here I can't have any IP for the slave because it's located in my office behind a series of NAT routers...

Does anybody have a solution, knowing that I can't stop the master and there is about 50GB of data on it? If you have any other solution to make a 'hot' data transfer from a master to a slave, I'm also very interested.

Thank you in advance.

  • Assuming you can SSH to the master, how about this, from the slave:

    ssh master "mysqldump 'master' --master-data=1" | mysql 'slave'

    This will run the dump command on the master, but reload it locally.

    Remiz : Thanks for the answer, it seems that's the way to go for me. However, I'm facing a new problem when I use mysqldump through ssh: after a few minutes of loading I get an error 2013: Lost connection to MySQL server during query when dumping... I assume it's more related to my master configuration; I'm looking into this. Have you ever seen this before? Thank you for your help anyway.
    From gorilla
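
A variant of the same idea that is a bit gentler on a live 50 GB master: compress the SSH stream and take a consistent snapshot so InnoDB tables aren't locked for the whole dump (database and host names are placeholders, and --single-transaction only helps for InnoDB tables):

    ssh -C master "mysqldump --single-transaction --master-data=1 mydb" | mysql mydb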

SBS 2008 - Workstation does not have a DNS Name

I'm trying to resolve some issues with computers not obtaining Group Policies properly, and in the SBS Console window I noticed that the status of a lot of our workstations is Unknown, so there is very little I can do with them remotely. First part of the question: is it normal to have so many unknown machines (as opposed to Online/Offline)? I would say about 30% of our computers are showing that.

One machine in particular has the status message "Unknown - No computer is mapped to this computer account (dnsHostName is empty)". I've tried to Google this, but with no luck.

Any thoughts?

  • Pick one workstation & review the Event logs to see exactly what the errors are. Without that info there isn't much to go on. In general GP errors are most often related to DNS or the firewall: static IP's & incorrect DNS settings on wkstns, incorrect DNS settings in the DHCP scope, incorrect or missing SRV or A records in the DNS zone and/or firewall issues on the wkstn preventing connections from the server.

    The one machine that is not mapped may need to be removed and rejoined to the domain if you can't find anything obviously wrong.

    GP issues can be more complicated and running GP Results from GP Mgmt on the SBS box can provide more info on what policies are expected to be applied but you'll want to make sure the fundamentals are correct first.

    From Ed Fries
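
Two quick checks that go with the advice above (the domain name is a placeholder): see which policies actually applied on a workstation, and confirm the domain controller SRV records exist in DNS:

    rem on the workstation
    gpresult /v > c:\gpresult.txt

    rem from any machine, against your internal DNS
    nslookup -type=SRV _ldap._tcp.dc._msdcs.mydomain.local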

htaccess to strip WWW from domain *and* any subdirectories

Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.(.*) [NC]
RewriteRule ^(.*)$ http://%1/$1 [R=301,NC,L]

I am running the above code and it strips the www when you go to http://www.mydomain.com, but how do I get it to also strip the www when you go to a subdirectory, e.g. http://www.mydomain.com/users ?

  • You might want to enable

    RewriteLog "yourLogFile"
    RewriteLogLevel 3
    

    to get some more information about what is going on ...

    From Dominik
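
If the log shows the per-directory prefix getting in the way, a common variant is to build the redirect from REQUEST_URI instead of the captured pattern, which keeps the full path including subdirectories (a sketch, not tested against your setup):

    Options +FollowSymlinks
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
    RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]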

How to make my domain name a Name server

Hello, I got a dedicated server and a domain name.

They sent me a didikit login and an IP. Now I want to make my own name servers like this: dn1.mydomain.com and dn2.mydomain.com.

How do I do that, given that I have only 1 IP?

Note: I bought the domain name from another company.

Thanks

  • If you go to the company you bought the domain from, then provided they offer DNS tools, you will be able to set up your DNS settings to create the name server entries which point to your IP address.

    For example, GoDaddy has lots of guides on this. But here is one user-created guide:

    http://treycopeland.com/2006/08/13/how-to-setup-your-own-nameservers-with-godaddy/

    From Laykes
  • There is one thing that needs to be clarified. Are you just wanting to use your own domain name in the name server delegations and have them point to your provider's DNS server, or do you actually want to run your own DNS server on your own system?

    If you want to continue using your existing DNS provider, it should just be a matter of getting in touch with your domain registrar and registering name servers and providing the IP addresses of the existing DNS servers. Most registrars will allow you to do this, though some will not. Also keep in mind that if your DNS provider changes one (or more) of their IP addresses, you will have to update the name server registrations with the registrar as well.

    If you want to run your own DNS servers, you will need two IP addresses. You could run one yourself and use an outside service for the secondary if you only have one IP. There is a very good reason that the RFC's on the topic say that you must have two name servers and that they should be geographically and logically separated on the network. Your IP addresses will need to be static as well.

    If this domain will be used for anything other than learning or messing around with, seek professional assistance. DNS is a core protocol that underlies nearly everything else (web site, e-mail, etc.). If it's not done properly it will cause problems down the line for your business (or your customers, whatever the case may be). If it's just a personal domain, have fun and experiment. It's the best way to learn!

Save As... causes XP MSOffice 2003/2007 Applications to hang

Basically, anytime a user on my internal network has an MS Office application open on a Windows XP machine and does anything that would open a File dialog (e.g. Save As..., Open..., etc.), it causes the program to hang indefinitely and the user ends up losing their work. Anyone have any experience as to what might be causing this? Any advice on how to go about determining why it is happening? How to fix it?

  • Typically that's due to a network share that's not responding in a timely fashion. Have you recently decommissioned a common network drive or printer or some other share? Try it on a machine that has no share connections (that is, a net use from a command prompt shows no entries in the list).

    I've seen badly configured antivirus clients do this as well, where they're configured to scan network shares.

    I've also seen evil Explorer shell extensions (like TreeSize's extension) cause this, where the extension hits network shares and tries to do something like index the entire share.

    From squillman
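
A quick way to check for (and clear) stale mappings on an affected machine, as suggested above; the drive letter is an example:

    rem list current connections and look for unavailable/disconnected shares
    net use

    rem remove a stale mapping
    net use Z: /delete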

Ubuntu server very slow out of the blue sky (Rails, passenger, nginx)

I run Ubuntu server 8.04 on Linode with multiple Rails apps under Passenger + nginx. Today I've noticed it takes quite a lot of time to load a page (5-10 secs). And it's not only websites; SSH seems to be affected too.

Having no clue why this may be happening, I started to check different things. I checked how the log files are rotated, and I checked if there's enough free disk space and memory. I also checked the IO rate; here's the output:

$ iostat

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.17    0.00    0.02    0.57    0.16   99.07

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
xvda              2.25        39.50        16.08     147042      59856
xvdb              0.00         0.05         0.00        192          0
xvdc              2.20        25.93        24.93      96530      92808
xvdd              0.01         0.12         0.00        434         16
xvde              0.04         0.23         0.35        858       1304
xvdf              0.37         0.31         4.12       1162      15352

Rebooting didn't help either. Any ideas where I should be looking?

  • Since you're on a Linode (I have one too), you're subject to load conditions on the physical host as well. The load on the host will not be reflected in tools like top or iostat. Go to the Linode dashboard and look at the host stats. That represents the physical server that your virtual instance is running on.

    Linode lets you request a move if you feel you're on a server with another user that is hogging physical resources.

    Please also include your memory stats: swap vs cache vs buffers, etc. (the top section of top works well).
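
For reference, the memory and swap numbers being asked for can be grabbed with standard tools on 8.04:

    free -m
    vmstat 1 5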

How secure is Virtualmin?

How secure is Virtualmin? How does it compare to cPanel or other web hosting control panels? Will using Virtualmin prevent me from being PCI compliant?

  • In what context do you need hosting automation? Is this for internal use, or do you plan on selling services based on this solution?

    For shared hosting automation, I prefer Plesk. I find it works very reliably and does not interfere with the underlying OS. Some people prefer cPanel as it allows much more point and click customization, such as choice of PHP version, choice of Apache, etc. Both systems are reliable if you use them as intended, but I like the ability to manage the OS via the default package managers (rpm in my case as I work on Red Hat and the like).

    I've dealt with virtualmin before but found it rather cumbersome and not a polished product. This was a couple of years ago so it may have improved.

    I think it really boils down to usage. If this is to be used for business purposes (managing important business sites, selling hosting), I prefer Plesk, then cPanel, in that order; and then, if you must: DirectAdmin, Webmin/Virtualmin.

    There are a few other hosting automation panels out there but their market share is rather small -- this means a much smaller userbase if you hit an issue.

    Plesk and cPanel both have active forum communities to get help and plenty of companies like ours where you can get paid support.

    http://www.plesk.com/

    http://www.cpanel.com/

    http://directadmin.com/

  • Webmin is where most security questions would come into play, as it is where logins and such happen. Webmin has a very good security history, and its security record is public: http://www.webmin.com/security.html

    It's been several years since the last serious root-level or direct data exposure exploit was discovered, though there have been a few XSS vulnerabilities in the past couple of years. No software of Webmin's complexity will be completely bug-free, including security bugs, but we do take security issues very seriously, and they get fixed quickly. I think Webmin core is about on par with OpenSSH in number and severity of vulnerabilities discovered in the past five years, and I think we all agree that OpenSSH has a really good security record.

    PCI compliance is entirely possible in a Virtualmin system, as nearly everything related to PCI is provided by the OS (so if your OS is CentOS, then you'd take the same steps you'd take with a non-Virtualmin CentOS system; which isn't all that much). We have hundreds of users who have gone through the PCI compliance process.

    Note that the PCI scanner is kinda dumb, and will flag the CentOS (or Debian or Ubuntu) standard Apache package as being old and insecure (and since our Apache build is just a rebuild of the OS-provided package with suexec docroot set to /home, ours also gets flagged)...but the OS vendor applies security patches, which correct security issues. So, you have to add an exception for that particular package. This is well-understood by the PCI folks, and you won't have any trouble from them over this; it'd be more dangerous to build Apache from source, get PCI compliance, and then forget that you'd installed from source. We've seen security issues from this kind of thing a lot over the years, so we definitely recommend you stick with standard packages whenever possible, so that normal updates via yum or apt-get will work (and Virtualmin has an updates notification module on the System Information page to let you know when you have updates, if you aren't running them automatically).

    In short, I believe Virtualmin security is at least as good as the competition, though I'm certain no one has a perfect security record, since the target that the most popular products provide is huge. Webmin, cPanel, and Plesk are all prime targets for black hats because they have root privileges, and run on millions of machines (I know Webmin does, anyway, I'm not sure of the numbers for cPanel or Plesk).

    And, since jeffatrackaid has gone to the trouble to bring up our competitors forums, I'll mention that Webmin/Virtualmin also has a very active community at http://www.virtualmin.com (and if you like the old school mailing list support process, http://www.webmin.com has the hookup).

    Disclaimer: I'm a developer on Webmin and Virtualmin.

    Josh : Thanks @swelljoe!
    From swelljoe

OpenSSL support for Ruby: "Cipher is not a module (TypeError)"

The Problem

Our systems admin needed to upgrade the packages on our CentOS 5.4 dev server to match the packages on our production server. The upgrade affected ruby and/or openssl.

We run a Ruby on Rails issue tracking system called Redmine that is deployed with Passenger on Apache. Everything worked before the server update, but when trying to access the ticket system now, I get the following error:

Error message:

Cipher is not a module

Exception class:

TypeError

Application root:

/home/dev/rails/redmine-0.8.7 

I've been trying so hard to fix this problem but I can't seem to beat it.

I have tried following this guide:
http://iamclovin.posterous.com/how-to-solve-the-cipher-is-not-a-module-error

When I try require 'openssl' in IRB, I do see a true return value. However, I'm still seeing the Cipher.rb is not a module TypeError when accessing the ticket system.

Possibly (probably) related:

I've tried updating Passenger, but when I try passenger-install-apache2-module I see:

Checking for required software...

* GNU C++ compiler... found at /usr/bin/g++
* Ruby development headers... found
* OpenSSL support for Ruby... /usr/lib/ruby/1.8/openssl/cipher.rb:22: Cipher is not a module (TypeError)

Any help?

  • ruby 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux]

    In the Ruby source directory, rebuild and reinstall the openssl extension, then copy it over the system Ruby's copy:

    cd ext/openssl/
    ruby extconf.rb 
    make
    sudo make install
    sudo cp -R /usr/local/lib/ruby/site_ruby/1.8/openssl* /usr/lib/ruby/1.8/
    

    Finding a fix for this took a very long time...

    From macek
  • Thanks a lot! I was just having the same issue; I had to compile my Ruby myself, since what Fedora has is 1.8.6.

    From Slavik

Ubuntu Slow Server Response; Initial Load; RoR

Hey guys,

I have a rails app sitting on an Ubuntu 9.10 server located here:

http://sandbox.incolo.com

If you hit it initially, it takes about 4 seconds to do its initial load. Once the load has happened, the server responds nearly instantly. Any thoughts as to why the initial HTTP request is so slow? It definitely shouldn't be loading this slowly the first time, especially since it's nearly pure text with zero DB calls.

Here's some info:

Ruby 1.9.1-p376 Rails 2.3.5 Gem 1.3.5 MySQL 5.1