Friday, January 21, 2011

Recommend a satellite internet provider

Part of our team is in Afghanistan, and they have asked me to find an internet solution for their office. The only option I can think of is satellite internet access, but unfortunately I do not have any experience with that kind of equipment. So I am asking you to recommend someone who can cover the Middle East and who is also connected with Europe, because that is where our main office is located, allowing us to easily make payments and agreements.

We do not need a super-stable or fast link; a 2Mb upload and 512Kb download will be just fine for our purposes.

  • For a project I was involved with in Mosul, Iraq, we used Horizon Satellite Services. It was only a 512Kbps/512Kbps connection, but I am almost positive they offer service up to 2Mb. The connection, though horribly latent (who would have guessed), was very stable. They are based out of Dubai, but have some sort of presence in Germany. Hope this helps you in some way.

    adopilot : Of course it helps. A subquestion: did you buy the equipment from them or from somewhere else? And, if it is not a secret, what is the approximate price for this kind of link?
    ITGuy24 : I think it was about $10k for the hardware and around $400/month. Not cheap.
    Dave M : Using something like a Riverbed Steelhead WAN accelerator on the link might help. Depends on your traffic but have heard of users with satellite links getting great results.
    From ITGuy24
  • You won't see that kind of upload speed on a satellite internet connection. You can get about a tenth of the download speed for upload. The ones I have dealt with are incredibly slow, and it was sometimes easier to just dial up into the site. The latency is what makes it so slow.

    ITGuy24 : You're right about latency being an issue, but so long as the traffic doesn't time out it's not a huge problem. As for speed, I know HughesNet offers up to 5Mbps (could be higher now), and on most satellite connections the upload speed can be picked. It's not uncommon to have a faster upload than download.
    xeon : Hmm. Thanks for the information. I didn't know you could actually get decent upload. Looks like the max upload would be 10Mbps.
    From xeon

Changing Permissions on a Windows Share from Linux?

I haven't the slightest idea if this is possible, though I believe it is not.

I would like a Linux client to be able to change permissions on folders on a Samba share on an NT (or Windows 2000, or something) server. Is this possible, assuming the user who is accessing has sufficient privileges?

If so: From the command-line, how would this happen on OSX and on Linux?

Note: For clarity, the server with the share is Windows. The client accessing the share is Linux. Something other than Samba could also be considered, if someone has a better idea :)

  • There are programs that can do this, such as RMTSHARE from the Windows Resource Kit or SetACL (search SourceForge), but neither runs natively on OS X or Linux. You could investigate running them in a virtual environment on OS X or Linux. Alternatively, if the Windows machine you are attempting to modify allows remote login, you could install one there and run it remotely (see also the sketch after this answer).

    quack quixote : Alternately, these tools might run under WINE (instead of a full virtual machine).
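
A possible native route, offered as a hedged aside: the smbcacls tool that ships with Samba can view and modify ACLs on a Windows share directly from a Linux (or Samba-equipped OS X) client. A minimal sketch, where the server, share, folder, and account names are all placeholders:

    # view the ACL on a folder (prompts for the password)
    smbcacls //ntserver/share somefolder -U adminuser

    # add an ACE granting a user full control
    smbcacls //ntserver/share somefolder -U adminuser -a 'ACL:DOMAIN\bob:ALLOWED/0/FULL'

Whether this works against a given NT/2000 server depends on the server's ACL support, so treat it as something to test rather than a definitive answer.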

Apache ProxyPass not serving php properly

I've configured Apache to proxy all requests to foo.bar.com except for the alias /bazbar. The proxy portion of the configuration works, and the ProxyPass exclusion works, except that it serves index.php as a plain text file.

How can I get Apache to serve the php files properly? Here is my configuration.

<VirtualHost *:80>
  ServerName foo.bar.com
  Alias /bazbar /var/www/bazbar

  ProxyPreserveHost On

  ProxyPass /bazbar !

  <Location /bazbar>
    SetHandler None
  </Location>

  <Location />
    Order allow,deny
    allow from all
    ProxyPass http://localhost:8080
    ProxyPassReverse http://localhost:8080
  </Location>
</VirtualHost>

*Note: I have confirmed that PHP is configured properly, because when I go to http://localhost/somescript.php the PHP renders properly.

  • If you visit http://localhost:8080/ on that machine, do you get index.php served as a text file, or does it run on the server? In the case of a proxy, Apache will simply take what it got and feed it back to the client. That is where I would look first.

    adam : http://localhost:8080 is a separate application. When I go to http://localhost/bazbar/index.php the php script is rendered properly. The problem is that http://foo.bar.com/bazbar/index.php serves index.php as plain text. Does that help clarify?
    Richard June : I misunderstood. I follow you now. Then I think the <Location /bazbar> section is your problem. If you read this page: http://httpd.apache.org/docs/1.3/mod/mod_mime.html#sethandler you'll notice that you're disabling all handlers for that Location. If you remove the SetHandler directive, I think you'll find that it works.
    adam : That was it. I replaced SetHandler None with Order allow,deny and allow from all and that worked. Can you submit another answer with that suggestion so I can accept that answer? I want to make sure the answer I accept is clear for others who read this question.
  • Look at the <Location /bazbar> section. SetHandler None disables all handlers for that Location. You need to remove that directive in order for it to work as you expect.
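
For reference, a minimal sketch of the corrected section, per the fix the asker confirmed above (the SetHandler directive replaced with plain access directives):

    <Location /bazbar>
      # do not disable handlers here, or PHP comes back as plain text
      Order allow,deny
      allow from all
    </Location>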

What possible events could cause a MySQL database to revert to a previous state?

A client of mine recently had a strange event with their MySQL database. Several days ago, one database suddenly "went back in time". All the data was in the state it was in several months ago. Even most of the .MYD and .MYI files had timestamps from November.

Fortunately, the server is not in production yet, but we need to understand how it happened so it doesn't happen again.

I'm not a MySQL guru, but I couldn't think of a scenario that could cause the database to rewind like that short of restoring from a backup. What could have happened here? Where should I look for clues?

(Server is FreeBSD 6.4)

  • Nothing showing up in the logs for the time period where the reversion was believed to have happened?

    No one ran an operation that deleted records after a certain date?

    Are you looking at raw data from the database or from an application that may be filtering the output so you don't see things after a certain date?

    Any data restore from backup run, anyone playing with filesystem snapshots?

    Any scripts running to make backups that could have hiccuped?

    Who else has access to that directory with rights to copy/modify programs? You said it wasn't a production database, so are developers playing with the server, with access to version control that could have done something to the file?

    Filesystem errors showing up? Was it just the database files that were affected, no other system or user data? Logfiles?

    Bart Silverstrim : This was just a quick list of things that would go through my mind, but I realize you said the datestamps are also backdated, so I'd suspect that someone restored the database directory, copied files back into the directory to "fix" a problem, or did something with filesystem snapshots to revert the system to a previous state - but those are only my suspicions. Someone more well versed in DBs may have another idea, but this sounds like someone with admin access to the machine did something accidentally, or is covering something up.
    Bart Silverstrim : At any rate, as far as I know databases don't just "roll back" complete with datestamps, in any database transaction or database engine, and it would have had to happen in a way that didn't cause issues for the database engine running in memory. I'm no DB admin, but I would think the SQL engine would be unhappy having the active database files yoinked out from under it while running, so the logs should show the fallout if the database didn't crash (and it should have held some filesystem locks, since the files would be open, I'd think...)
    Bart Silverstrim : Oh, and I'd also start automating a few scripts to pull up records of who was logging into the machine over the past few days, to see who was on it when, and compare that against when it approximately happened, to narrow down who may have done it, even accidentally (see the sketch after these answers). You can also grep for commands run via sudo, if your admins aren't using root by default. If they are, you might need to change policy for a better audit trail, or at least keep a list of who has access to alter database files and such on the system.
  • # cat /etc/fstab
    # mount
    

    Maybe the MySQL database, along with (for example) /var, was copied to a separate disk and then mounted on top of the existing /var(/db(/mysql))?
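
A minimal sketch of the kind of login/sudo audit Bart describes, assuming a FreeBSD 6.4 box with default syslog settings (log paths may differ):

    # who logged in recently, and from where
    last | head -50

    # any sudo activity around the suspected time window
    grep sudo /var/log/auth.log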

How to configure a VPN

What is a VPN, and how is it established?

How do I find and delete a cron job?

I created a cron job for a WordPress plugin that I no longer want running. I am not very good at navigating around in Unix. The cron job is the following:

*/10 * * * * /usr/bin/wget -O /dev/null http://ADDRESS_OF_THE_FILE >/dev/null 2>&1

Does anyone have pointers on how to find this cron job and delete it?

Thanks!

  • Run this command:

    sudo crontab -e  
    

    It will open your cron jobs in a text editor. Remove your line and save the file :)

    From Temnovit
  • Run in a console:

    crontab -e
    

    crontab will then open in an editor; simply delete the line there, save the file, and quit the editor - that's it...
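
If you would rather not use an editor at all, here is a minimal non-interactive sketch, assuming the job lives in your own user's crontab (the grep pattern is whatever uniquely matches your wget line):

    # confirm the line is there
    crontab -l

    # rewrite the crontab without the matching line
    crontab -l | grep -v 'ADDRESS_OF_THE_FILE' | crontab -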

LogParser: Find all log rows where session cookie = the session cookie found in a row that had an error

Looking for examples of integrating LogParser (LP) into this workflow:

  1. A SQL Server event indicates IIS tossed an error.
  2. SQL parses the body of the error message and generates an LP command line that queries the IIS logs to collect more information.
  3. An email is dispatched to tech support with a link to an HTML (or .aspx) page that would run and present LP's output.

So I think the question boils down to:

How do I hook LP up to IIS7?

EDIT: OK... re-boiling the question. When an IIS exception is triggered, I want to see all the other log rows where the session cookie is the same as the session cookie found in the error event. Yes, I could live with polling the IIS logs via a scheduled task - that takes ELMAH/SQL out of the equation.

But now the question becomes: Find all log rows where session cookie = the session cookie found in a row that had an error

  • You could set up a recurring Windows job that interrogates the IIS logs with LP every 10 minutes or so, and use LP's checkpoint feature so it only looks at the not-yet-checked parts of the logs. It's not integrating with IIS, but it might solve the problem (a sketch of the matching query follows below).

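To illustrate the core query, a hedged sketch of the two-step LogParser approach, assuming IIS W3C logs with the Cookie field enabled; the log file pattern and the cookie name (ASP.NET_SessionId here) are assumptions:

    REM step 1: pull the session cookie from the rows that errored
    LogParser -i:IISW3C "SELECT cs(Cookie) FROM u_ex*.log WHERE sc-status >= 500"

    REM step 2: pull every row carrying that same cookie value
    LogParser -i:IISW3C "SELECT * FROM u_ex*.log WHERE cs(Cookie) LIKE '%ASP.NET_SessionId=VALUE-FROM-STEP-1%'"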

The "Connect to Server" dialog in GNOME Nautilus contains no items

I have installed GNOME Nautilus 2.28.4 on my Debian 5.0 box, and the File -> Connect to Server dialog's "Service type" list is almost empty:

[screenshot: the "Service type" drop-down is nearly empty]

How can I fix that?

  • Trying to figure out the same thing. I will post a comment when I do.

    abatishchev : @86me: Hi. Got any results?
  • Exact same problem after upgrading from Karmic to Lucid. Subscribing.

  • Has anybody found a solution?

    From Stevo
  • sudo apt-get install gvfs-backends
    
    From Stevo
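
A follow-up note on Stevo's fix: the new service types may not appear until Nautilus is restarted. One way to do that (an assumption; logging out and back in should also work):

    nautilus -q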

Free webhost that lets you access server logs?

What's a free webhost that lets you access server logs of your site?

  • Usually any webhost with cPanel or some similar app lets you access the {apache|ftp|dns|other service} logs for your account,

    if that was the question; if you want to see the logs of the operating system, I guess nobody will show those to you.

    From petre

IIS7: How to serve a default 404 image when an image is requested?

How can I get IIS7 to serve a default image instead of a 404 html page, when an image is requested?

e.g.

request for: http://www.example.com/myImage.jpg

myImage.jpg does not exist, so we deliver another image instead?

  • Consider URL Rewrite or something like that, although you'll need to do some extending for it to know whether the jpg exists or not.

    You can use customErrors, but that will affect all content types. Additionally, you could add your own HttpHandler that watches for .jpg requests, confirms the file doesn't exist, and serves up a default image in that case (see the sketch after this answer).

    Lucifer : OK thanks, that is good advice and I had considered both, but I was wondering if it was possible without rolling my own - thanks
    Scott Forsyth - MVP : No, I don't believe so. You can point all .jpg images to a single image, or all 404's to a single image, but there isn't something built-in that allows you to throw a custom 404 just for .jpg files.
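
To flesh out the URL Rewrite suggestion above, a hedged sketch of a rule that serves a fallback image only when the requested .jpg does not exist on disk; it assumes the URL Rewrite module is installed, and images/default.jpg is a placeholder path:

    <system.webServer>
      <rewrite>
        <rules>
          <!-- rewrite requests for missing .jpg files to a default image -->
          <rule name="Missing jpg fallback" stopProcessing="true">
            <match url=".*\.jpg$" />
            <conditions>
              <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            </conditions>
            <action type="Rewrite" url="images/default.jpg" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>

Note this returns a 200 rather than a 404, which may or may not matter for your crawlers and caches.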

domUs hang on startup, Debian Lenny

My server box has a hardware problem, so I unracked it and brought it home. While it was home, I figured it was time to upgrade from Etch to Lenny. Unfortunately, now that I've done so, all my domUs hang while starting up. I even tried making a new domU using

xen-create-image --lvm xen-space --hostname=test1 --size=8Gb --dist=lenny --memory=512M --dhcp

and when it starts up, it hangs in startup as well. The last message on the console is

Starting periodic command scheduler: crond.

The last thing in xend.log is

INFO (XendDomain:1165) Domain test1 (7) unpaused.
[ 5526.429198] blkback: ring-ref 8, event-channel 8, protcol 1 (x86_32-abi)
[ 5526.441788] blkback: ring-ref 9, event-channel 9, protcol 1 (x86_32-abi)
  • It's not hung; it just isn't outputting anything to the console. You need to add the following to your kernel command line:

    console=hvc0 xencons=tty
    

    You can then fix it permanently by editing inittab, I believe (see the sketch after this answer)...

    http://wiki.debian.org/Xen#Nologinpromptwhenusing.60xmconsole.60

    Paul Tomblin : Am I misreading that? I thought it was saying that hvc0 was for the serial console, not for the console you get when you do "xm console"?
    Justin : I have no idea what the difference is, if there even is one.. all I know is that specifying hvc0 fixes the problem :-)
    Paul Tomblin : Yeah, the wiki wasn't too clear, but it turns out that I had to change the `1:...*getty tty` line on all the domUs too, as well as install udev on all of them so that ssh works.
    Justin : ah yeah... the getty stuff is for the domU, you shouldn't have to touch the dom0 unless that was having problems too.
    From Justin
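
For the inittab edit mentioned in the answer, a sketch of the usual change inside each domU, assuming a stock Debian /etc/inittab (the surrounding fields are the Debian defaults):

    # before: getty on tty1, which a Xen PV console never shows
    1:2345:respawn:/sbin/getty 38400 tty1
    # after: run the getty on the Xen console device instead
    1:2345:respawn:/sbin/getty 38400 hvc0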

How To Suppress an Apache Error

I've gone through the trouble of blocking a number of bots that were trying to crawl our site. The issue now is that the following error is taking over the Apache error log:

client denied by server configuration

I was hoping an Apache expert out there could tell me how to suppress this specific error message so it isn't written to the error log.

Thanks in advance for your help!

  • grep -v

    Really. I mean it. Why manipulate the logs at time of writing?

    Edit 1

    tail -f -n100 /var/log/httpd/error_log | grep -v 'client denied by server configuration'

    If you really want to prevent the error from being written to disk, you can pipe your logs through a script (see the sketch after this answer). More details here:

    Apache docs

    Russell C. : @Warner - Fair enough. Any ideas how I can change the following command I currently use to view the log file so that it suppresses those errors? tail -100 /etc/httpd/logs/error_log
    Russell C. : @Warner - Thanks! That seems to produce something slightly different since it first grabs 100 lines of the error log and then only shows non-matching lines. Is there a way to instead display the last 100 lines that don't match?
    Warner : Try this: `grep -v 'client denied by server configuration' /etc/httpd/logs/error_log | tail -n100`
    Russell C. : @Warner - works great. Thanks for the help!
    From Warner
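
If you do go the piped-log route, a minimal sketch (the script path and name are hypothetical); first, in httpd.conf:

    ErrorLog "|/usr/local/bin/error-filter.sh"

and then the filter script itself, which reads log lines on stdin and drops the noisy ones:

    #!/bin/sh
    # /usr/local/bin/error-filter.sh (hypothetical path)
    grep -v 'client denied by server configuration' >> /var/log/httpd/error_log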

Access Denied with System Account in Central Admin

Hi,

I have installed SharePoint 2007 with the Complete/Server Farm settings. When I supplied the credentials for the account in the configuration wizard (installer), I supplied another local account (just the name of the account) and the password. This is an admin account on the OS (Windows Server 2008), and it is labelled as System Account in SharePoint Central Admin.

I specified just the name of the account as it is local. Is this perhaps the cause of the error?

When I go to Operations > Servers in Farm (in Topology) and select the current server, I get an access denied error (I am doing the admin tasks after the server is set up).

How can I resolve this? Do I need to set up AD?

Thanks

  • I think the account for Central Admin has to be the same as the farm account - check IIS and see what account is assigned to the application pool. If this is a dev machine, you do not need to set up AD and can use the same admin account for everything.

    blade : Hi, Sharepoint Central Admin has WIN-H54AGVCYSVK\SPAccount and OfficeServerApplicationPool has NetworkService (the other 2 pools are defaultAppPool and Classic .NET AppPool).
    blade : I tried making the Sharepoint Central Admin account NetworkService but when I went into Central Admin I got an IIS error (couldn't read config file).
    IrishChieftain : Add the correct account (system) to the Central Admin application pool, recycle and try again.

SSHd restrictions on a per-user basis

I need to restrict certain user(s) so that they can only SSH in using SSH keys, while other users can log in using a key or a password.

An example:

I'd like the root user to be able to log in remotely (through sshd) using a key only, so no password would be accepted (even if the password is right),

while the other users (everyone else on the system) can log in using a key and/or a password.

How would I do that?

  • Set up ssh as follows:

    nano /etc/ssh/sshd_config
    
    AllowUsers username1 username2 username3
    

    Restart SSH

    Then provide keys to the users you would like to have log in without passwords.

    ssh-keygen is used to generate that key pair for you. Here is a session where your own personal private/public key pair is created:

    #ssh-keygen -t rsa
    

    The command ssh-keygen -t rsa initiated the creation of the key pair.

    I didn't enter a passphrase for my setup (Enter key was pressed instead).

    The private key was saved in .ssh/id_rsa. This file is read-only and only for you. No one else must see the content of that file, as it is used to decrypt all correspondence encrypted with the public key.

    The public key is saved in .ssh/id_rsa.pub.

    Its content is then copied into the file .ssh/authorized_keys on the system you wish to SSH into without being prompted for a password.

    #scp ~/.ssh/id_rsa.pub user@remotesystem:~/.ssh/authorized_keys
    

    Finally, lock the account (key authentication will still be possible):

    # passwd -l username1
    
    alexus : that's not what I'm looking for. Let's say I want root to be able to log in using keys only, while other users can log in with a key or a password.
    Patrick R : then don't lock the account with passwd -l username1
    From Patrick R
  • What I would do is set /etc/ssh/sshd_config such that:

    PermitRootLogin without-password
    

    just for extra security, and to avoid having to lock the root password (it would only allow root to log in using a key).

    I would use AllowGroups instead of AllowUsers, as for me it is more convenient to add users to a group than to edit sshd_config, but that may depend on your personal preferences.

    From golan
  • I think what you want is "Match User". You use it to match a username, then indent a series of config settings that apply specifically to that user.

    Match User Joe
      PasswordAuthentication no
    
    Match User Jane
      PasswordAuthentication yes
    

    I use this to set up chroot SFTP-only access sometimes for clients.
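
Whichever approach you choose, it is worth validating the config before reloading, since a broken sshd_config can lock you out of a remote box. A quick sketch (the init script name varies by distro):

    # check sshd_config for syntax errors first
    sudo /usr/sbin/sshd -t

    # then reload; on some systems this is 'service sshd reload' or similar
    sudo /etc/init.d/ssh reload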

Does a crontab file automatically bash a text file?

Hey there,

I have a cron job set up. In the crontab file, I have a path to a text file. My text file has a wget command (which, in turn, executes a PHP file). If the crontab file just has the path to the text file, will it automatically bash (execute) that text file? Or do I need to prefix the path to the text file with bash?

Thanks all! -Steve

  • If the file is executable (check whether it has x in ls -l; if not, use chmod to set the executable bit) and the first line contains #!/bin/bash, then it will be interpreted by bash.

    The other option is, as you suggest, to pass it as an argument to bash:

    /bin/bash /path/to/your/file.sh
    
    LookitsPuck : Thanks mate! I will take care of this now.
    LookitsPuck : So, my script should do the following: 1st line: #!/bin/bash 2nd line: wget <insert web URL here>. Is that correct?
    Mikael S : Correct. And use `chmod +x` to set the executable bit. Also, you should probably use the full path to wget, since cron jobs usually run in a "bare" environment where `$PATH` might not include the directory where the wget binary resides.
    From L.R.
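
Putting Mikael's advice together, a sketch of what the text file itself might look like (the URL is the placeholder from the question):

    #!/bin/bash
    # full path to wget, since cron's PATH is usually minimal
    /usr/bin/wget -O /dev/null http://ADDRESS_OF_THE_FILE >/dev/null 2>&1

and then make it executable with chmod +x /path/to/your/file.sh.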

How To Treat Any File As PHP In Windows IIS 7?

I want a .any file (could be .this or .that, etc.) to be treated as a .php file, with its .php code parsed, on my Windows 2008 server running IIS 7.

How would I go about doing so?

  • It's been a while since I did this, but the process last time was something like this:

    1. Start>control panel>administrative tools>Internet Information Service

    2. Choose the website you want to change (typically the default site) and then open the properties.

    3. On the Home Directory tab, make sure execute permissions are set to "Scripts and Executables", then go to Configuration.

    4. Here you click Add, then add the extension for all request types. Give the path to php.exe.

    You might have to restart IIS afterwards.

    darkAsPitch : Thanks pehrs, but as for number 2 - open properties? Do I right-click? I don't see any "properties" value listed anywhere for any site - default or not?
    darkAsPitch : Or 3? What do you mean by the home directory tab? I'm using IIS 7.
    pehrs : Oh, sorry. If you are on IIS 7 it's something like the Features View panel, then Handler Mappings; see http://technet.microsoft.com/en-us/library/cc771240%28WS.10%29.aspx
    From pehrs

SQL Server 2008 scalability vs. budget

I need to buy a new server dedicated to a Navision 2009 database.
It's 50GB and growing around 10GB a year.
16 users are connected all day.
System: Windows 2003 R2 Standard 64-bit
SQL Server 2008

Because of budget I must choose between:

  1. Quad Core Xeon 3.2 GHz and 2 GB RAM
  2. Quad Core Xeon 2 GHz and 4 or 6 GB of RAM

Which should I choose?
Which will give me better performance: more RAM or more CPU speed?

  • More RAM should give you increased performance, as long as it is allocated to the SQL Server instance in question; SQL Server should "seize" a lot of that RAM on startup of the service.

    Unless you are doing a lot of computations on the returned recordsets, having excess CPU will not increase SQL Server's performance. I would recommend the second option, with 6GB of RAM. If you use parameterised queries, this should allow many more of the frequently executed queries to have their result sets served from RAM rather than "extracted" from disk.

    Of course, RAM and CPU are not the only bottlenecks in SQL Server; look at your disk layout and RAID levels. Please see http://serverfault.com/questions/118767/standard-database-backup-procedures/119123#119123 for another post on here about how I have configured my servers. This is by no means expert advice, but I have found it performant in my environment.

    Make sure to do regular backups and remember it's not a valid backup until you've tested your restore!

    jl : Dan makes a good point regarding the disk subsystem. Too many people focus on the front end and ignore the disk subsystem. SQLIO and SQLIOSim can help give some measure of behavior. The first puts more of a pure load on it; the second simulates SQL Server IO patterns.
    Dan : +1 for actually mentioning tools to test this with!
    SeeR : My data and log disks are "Express IBM 146 GB 2.5in SFF Slim-HS 10K 6Gbps SAS HDD", so I hope they are pretty good. Your backups are similar to mine - full once a day and log every hour. Thanks for clearing up my dilemma about RAM vs CPU. Once, on Twitter, I saw @codinghorror write about how surprised he was at how much speed grew when he changed the processors on the Stack Overflow servers. That is why I asked this question.
    From Dan

nslookup finds server name from ip, but whois claims the name has expired?

Today I noticed I was getting UDP traffic on random ports from an unknown IP address that was definitely not on my domain. When I looked up the IP using nslookup, it returned a name. But when I did a whois on that name, whois complained that the name had expired and didn't give me back any information. So I tried the IP with whois, and whois couldn't find it at all.

How is this possible?

  • It is possible that the domain has expired very recently and the DNS servers are just returning the name they have cached. If this is the case, the behaviour shouldn't continue for long; 48hrs is usually the max.

  • Keep in mind that UDP traffic is not stateful, so these packets could be forged.

    From Zoredache

Is there a method to list all crontab jobs for all users on a system?

The title says everything.

I'm using Fedora 11.

  • cat /var/spool/cron/*

    Many distributions have additional system crons configured via /etc as well. For example, CentOS has files in /etc/cron*

    Let me know if you have any further questions.

    From Warner
  • I don't think so

    You could do something like this:

    for crontab in /etc/cron.*/* /var/spool/cron/* /etc/crontab
    do
        echo "$crontab"
        cat "$crontab"
    done
    
  • There is no native command to do this, but you can use a simple bash one-liner like this:

    for u in $(cut -f1 -d: /etc/passwd); do sudo crontab -u $u -l; done
    

    The above reads all the user entries in /etc/passwd and lists each user's crontab entries. sudo is required, since you need superuser privileges to access another user's cron.

    Luc M : Perfect! Thank you very much.
    ktower : This assumes your users are all listed in /etc/passwd. If you use a different naming service, say for example LDAP, to define your user namespace, they will not appear in /etc/passwd. You might instead consider using "getent passwd" in place of the cut command above.
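
Following ktower's comment, the same one-liner using getent so that LDAP (or other NSS) users are included:

    for u in $(getent passwd | cut -f1 -d:); do sudo crontab -u "$u" -l; done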

SQL Server 2005 Performance Problems

We're in the process of migrating to a beefy new SQL Server (12GB RAM, two quad-core CPUs, 12 x 15k rpm drives, Gbit network). We have the drives divided into 4 partitions for the OS, data, log, and full-text index files.

Here's the problem: I'm running a job that executes 36k searches (a combination of table and full-text joins) from a single-threaded console application. But rather than taxing our server, the job registers only about 5 to 7% CPU load, not the +-60% that we were getting on our old server.

From the dashboard reporting, the only waits we're seeing are occasional network IO waits - and they come and go. So it seems like SQL Server is throttling our connection?

Can anyone shed some light on this?

Thanks
Jon

  • Well, you could try running Perfmon and SQL Profiler to get a lot more insight into this. But first, please tell us a little more about your drive config. You say you have 12 drives divided into 4 partitions. Does that mean you made one big RAID array and cut it into 4 OS-level partitions, or did you make 4 RAID containers, each with one OS partition? The former is a good recipe for bad performance.

    Jon Leigh : Hi, we have 4 separate RAID arrays: RAID 1 for the OS, the log files, and the full-text index, and RAID 5 for the data files.
    mfinni : OK, that rules out a biggie for bad performance. RAID-5 isn't optimal for writes, but since your problem is reads, that's not a big red flag either. You're gonna have to dig into Perfmon and Profiler - and at this point I will stop being helpful, because I don't know the best counters to watch, off the top of my head.
    Jon Leigh : Thanks anyway :)
    From mfinni

How to increase the hard disk space in VMware Workstation with Mac OS X Leopard

I am using a pre-made image of Mac OS X Leopard 10.5.7, using an ISO called darwin.iso as the CD drive, which boots the OS.

Now everything is working fine, but the hard disk is only 16GB. I have increased the hard disk of the virtual machine in VMware, but the Mac OS hard disk still shows 16GB.

Is there any way to increase that space?

Do I have to change anything in that ISO, or edit the ISO image?

  • You'll have to resize the filesystem of the virtual disk inside Mac OS. For this, you have different possibilities (see also the sketch after these answers):

    • First, create a second virtual disk and copy the data over to the new disk, maybe with the involvement of Time Machine or Carbon Copy Cloner
    • Second, use an external tool like iPartition
    • Third, use this undocumented method (and don't forget to backup!)
    Master : So you mean to say that if VMware is showing an HD of size 100GB and the Mac is only showing 16GB, the extra space is there but has not been defined in the Mac?
    SvenW : Exactly. It's the same with every filesystem: When you create it, it has a defined size, which will not change when you change the size of the underlying partition. Oh, and I added the link for the third method.
    Master : Thanks buddy, I tried everything but was not able to increase the size, but I added a new virtual hard disk using VMware and that came out alright.
    From SvenW
  • I need to increase the network connections in VMware; how do I do this?

    From mahnaz
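
On the Mac OS side, one more possibility to test (an assumption; whether it works depends on the partition scheme and on Leopard's diskutil supporting a live resize): grow the HFS+ volume from the Terminal after enlarging the virtual disk. The device identifier and size below are placeholders; back up first.

    # find the volume's device identifier
    diskutil list

    # resize the volume to use the new space
    sudo diskutil resizeVolume /dev/disk0s2 100G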

How can I make VMware Lab Manager stop controlling a VM?

Hi guys

We have a VMware system that was set up a year ago for one of our projects. It consists of an enclosure with 3 servers (2 ESX 3.5 servers and a vCenter server) and a storage module.

We also have Lab Manager 3 installed, and LM is controlling a number of the VMs that run on the ESX hosts.

We have no need for Lab Manager anymore and wish to get rid of it. The problem is, it's controlling a couple of mission-critical VMs that we cannot afford to lose, and it now appears that those VMs are somewhat shakily backed up (we weren't able to do a "bare metal" restore to another VM from backups).

I dug around in Lab Manager for a couple of days, and I just don't see any option to make Lab Manager "release" a virtual machine so that it can be directly controlled by the vCenter server. As soon as I "undeploy" a configuration or a machine in LM, the vCenter client loses sight of it, and I found no way to regain it.

Now I'm afraid that if I simply detach the hosts in LM and uninstall it, vCenter won't be able to use those critical VMs, and then, as we say in ebonics - 'we be screwed'...

Anyone have any advice?

P.S. I combed over the installation guide and the user guide for Lab Manager. They make everything seem so easy and safe, but give no specifics or contingency-plan advice for what happens if things don't go by the book...

  • Have you tried making a clone of one of the VMs? Then shut down the original managed VM and just power up the clone.

    In a worst case scenario you could use VMware Converter to make another copy of the VM that is not managed by Lab Manager.

    That said, it seems like it should be possible to release a VM, but I'm not familiar enough with Lab Manager to say whether that is true or not.

    V. Romanov : Well, the saga is over. I don't know how, but Lab Manager lost contact with one of the ESX hosts, and all the machines that were on it became "unavailable" to LM and somehow became regular vCenter machines. The other machines were less important, and we just reinstalled and reconfigured them. I'm not really happy about how it came out (if we have to do this again we will be in the same hole) - but it works now, and I guess that's good enough.
    From Helvick
  • If you just power down the VM but leave it deployed, it can then be controlled through the vCenter client.

    From jjournay

Mounting case-sensitive shared folder in VirtualBox

I have an OS X host with an Ubuntu VM that is trying to mount a shared folder.

I'm using the options:

sudo mount -t vboxsf -o uid=1000,gid=100 pg ~/pg-host

The folder mounts fine; however, it appears that the mounted directory is case-insensitive, even though my OS X drive is formatted case-sensitive.

Are there any options to control this behavior?

  • Are you sure it's formatted case-sensitive? By default, HFS+ volumes are case-preserving but case-insensitive. Unless you performed a custom format on your disk, that is likely the case.

    rhettg : Yeah I custom formatted the disk. My fear is that virtualbox doesn't understand that's a possibility and just assumes HFS is case-insensitive.
    packs : Anecdotally, (and slightly OT) I've had bad luck with formatting HFS+ as case sensitive. I came across several app bundles that assume case insensitive, and would fail strangely.

Get all DNS records on remote server?

Is it possible to get all DNS records off a remote server?

  • Normally you can't, but if the DNS server allows zone transfers to anyone (unlikely), you can do it.

    John Gardeniers : +1 That's the definitive answer. Without a zone transfer you cannot even know if you have obtained all the records for a given zone.
  • Try a zone transfer from a Unix shell:

    $ dig axfr sld.tld. @nameserver
    

    To get a list of the nameservers delegated for your zone:

    $ dig soa sld.tld. +trace
    
    From ZaphodB
  • Zone transfers are always available to slave nameservers, i.e., at all the listed nameservers except the master. Higher-security configurations hide the master nameserver and may not allow public access to it.

    Once upon a time, zone transfers were frequently available to everyone. Today's best practices discourage allowing them to everyone; this helps limit information leakage. The above axfr command will get the data if it is available.

    From BillThor

Cannot print from network printer while on terminal server

I have two (local) network printers - HP PSC 750xi & Samsung ML-1710. While connected to the remote Terminal Server I am able to print on the HP. However, when I try to print on the Samsung, the printer activates as if it's going to print and then does nothing. Anyone know what I need to do in order to print on the Samsung? Thanks.

EDIT:

The terminal server does have the appropriate drivers for both printers -- in the case of the Samsung it is the "Samsung Unified Print Driver."

When I print, the printer queue takes the target document and moments later removes it, as if it had printed successfully. When I check the event viewer (on the TS), there is a notice that states:

"Document owned by Dan was printed on Samsung Universal Print Driver in session 1 via port TS001."

But unfortunately, the printer does not actually print any document.

  • Ensure the correct drivers are installed on the Terminal Services Server.

    Dave M : +1 This is usually the issue
    Dan : The correct drivers are installed on the TS. I've edited my original post with more information -- please see above.
  • If your Samsung is installed as a network printer on the client (a local printer with a TCP/IP or other network port assigned for printing), try implementing changes in the Windows client registry to force ALL client ports to redirect to the terminal session, as described here

    From Sergey

Linux: how to force quit a process as root

I have run a command to back up 7 accounts, and I want to quit that command while it's running. How can I quit it from the command line?

I want it to quit backing up all the accounts, not just the current one - otherwise I have to keep pressing it until it has gone through all the accounts.

  • Ctrl+C will kill the command if you ran it in the foreground from the same shell.

    kill -9 will destroy everything in its path; use it if you are killing the process from a different environment, or if you ran the command in the background.

    Master : I tried Ctrl+C, but it then drops to the shell while the processing is still going on.
    Dennis Williamson : Please don't do `kill -9` when it's not necessary. http://sial.org/howto/shell/kill-9/ and http://mywiki.wooledge.org/ProcessManagement#I.27m_trying_to_kill_-9_my_job_but_blah_blah_blah...
    From MattC
  • Use the ps aux command to find the process that runs the next backup, and kill it, for example with kill -9.

    Dennis Williamson : Please don't do `kill -9` when it's not necessary. http://sial.org/howto/shell/kill-9/ and http://mywiki.wooledge.org/ProcessManagement#I.27m_trying_to_kill_-9_my_job_but_blah_blah_blah...
    From radious
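
Tying both answers together, a sketch of the gentler-first workflow (the process name and PID are placeholders):

    # find the backup processes
    ps aux | grep backup

    # ask the process to exit cleanly first (sends SIGTERM)
    kill 12345

    # only if it ignores SIGTERM, escalate as a last resort
    kill -9 12345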