Wednesday, February 15, 2023

Ubuntu and CentOS VM Disk Extend

For Ubuntu (tested on Ubuntu 20.04) disk extend:

After you added the extra space to the disk, run: echo 1 > /sys/block/sda/device/rescan (must run as root)
enter cfdisk: sudo cfdisk
you should see the free space in green
go to the partition you want to grow with the free space (usually the last one) and choose Resize
now your partition should show the extra space; choose Write, type the full word yes (not just y) to save, then quit
run sudo pvresize /dev/sda3 (the /dev/sdaX number is from the previous step)
run sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv (run sudo lvdisplay to get the LV path)
run sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv (run df -h to get the filesystem name)
check with df -h that the new space was added
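Putting it together, a minimal sketch of the whole flow, assuming the LVM partition is /dev/sda3 and the default Ubuntu LVM names used above (check pvs, lvdisplay and df -h for the real names on your machine):

echo 1 > /sys/block/sda/device/rescan                  # as root: make the kernel see the new disk size
sudo cfdisk /dev/sda                                   # grow the partition interactively: Resize -> Write -> yes
sudo pvresize /dev/sda3                                # grow the LVM physical volume
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # give all free space to the logical volume
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv       # grow the ext4 filesystem
df -h                                                  # verify the new size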


CentOS (tested on CentOS 7) disk extend:

check if there is unusable space: cfdisk /dev/sda
if you added disk space but you don't see it in the OS, run: echo 1 > /sys/block/sda/device/rescan
then check again with cfdisk /dev/sda
fdisk /dev/sda
view current partitions: p
delete the last partition: d and then its number
re-create the partition with a bigger size: n, choose p for primary and the same number as in the last step, and accept the default first and last sectors so it starts where it did and ends at the end of the disk
set the partition type back to what it was: t, the same number as in the last step, then 8e for Linux LVM
you can check with p that everything looks exactly the same, except the end sector of the last partition is bigger
press w to save the changes and then reboot the machine
now we need to extend the LVM physical and logical volumes
pvdisplay
pvresize /dev/sda4 (/dev/sdaX is the number from fdisk)
lvdisplay to show the logical volumes
lvextend /dev/mapper/centos-root /dev/sda4
*you can get the LV name from df -h
*/dev/sdaX is the number from fdisk
next check the filesystem type of the logical volume with the df -Th command, in the Type column of /dev/mapper/centos-root
for xfs use:
xfs_growfs /dev/mapper/centos-root
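A minimal sketch of the LVM part after the reboot, assuming the grown partition is /dev/sda4 and the root logical volume is /dev/mapper/centos-root as above:

pvresize /dev/sda4                           # grow the physical volume to the new partition size
lvextend /dev/mapper/centos-root /dev/sda4   # extend the root LV by the free space on that PV
df -Th                                       # confirm the filesystem type (xfs here)
xfs_growfs /dev/mapper/centos-root           # grow the xfs filesystem (an ext4 root would use resize2fs instead)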

Wednesday, July 17, 2019

Install ELK 7.2 on Ubuntu 18 for SYSLOG

Ubuntu 18  and ELK 7.2


sudo apt update && sudo apt -y upgrade


#elasticsearch

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

sudo apt-get -y install apt-transport-https

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

sudo apt-get update && sudo apt-get -y install elasticsearch



sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service



#Kibana
sudo apt-get install kibana

nano /etc/kibana/kibana.yml # set server.port and server.host (server IP address or name)
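For example, to listen on all interfaces on port 80 (example values, adjust to your setup), the relevant kibana.yml lines would look like:

server.port: 80
server.host: "0.0.0.0"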

#enable port 80 to be used by node.js:
setcap cap_net_bind_service=+epi /usr/share/kibana/node/bin/node

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service



#logstash

apt install openjdk-11-jre-headless

sudo apt-get install logstash

systemctl enable logstash

#create new file

nano /etc/logstash/conf.d/logstash.conf

#add:

input {
    tcp {
        port => 514
        type => syslog
    }
    udp {
        port => 514
        type => syslog
    }
}


output {
  elasticsearch {
    hosts => ["localhost"]
  }
#    stdout {codec => rubydebug }
}


#enable port 514 to be used by java
setcap cap_net_bind_service=+epi /usr/lib/jvm/java-11-openjdk-amd64/bin/java

logger -n 127.0.0.1 system works
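To verify the test message actually reached Elasticsearch (assuming the default localhost:9200 and the default logstash-* index name), you can check the indices and search for it:

curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/logstash-*/_search?q=message:works&pretty'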

Sunday, April 15, 2018

ELK Stack Install on Windows Server 2008 R2

#Copy the entire install directory to c:\install\ , this guide assumes the following path c:\install\logserver

#Check that Firefox or Chrome and Notepad++ are installed on the system
#Open cmd as administrator, then copy and paste the uncommented commands

set "JAVA_HOME=C:\Program Files\Java\jre1.8.0_65"
setx /M JAVA_HOME "C:\Program Files\Java\jre1.8.0_65"

#install java
#C:\install\logserver\jre-8u65-windows-x64.exe /s
C:\install\logserver\jre-8u65-windows-x64.exe





#install node.js
#msiexec /qn /l* node-log.txt /i C:\install\logserver\node-v4.2.2-x64.msi
C:\install\logserver\node-v4.2.2-x64.msi



#extract ELK stack
mkdir c:\logserver\
cd c:\logserver\
unzip C:\install\logserver\elasticsearch-2.4.1.zip
cd c:\logserver\
unzip C:\install\logserver\logstash-2.4.0.zip
cd c:\logserver\
unzip C:\install\logserver\kibana-4.6.2-windows-x86.zip
cd c:\logserver\
unzip C:\install\logserver\nssm-2.24.zip

copy "C:\install\logserver\logstash-341.conf" "C:\logserver\logstash-2.4.0\bin\logstash.conf"


#configure elasticsearch

cd C:\logserver\elasticsearch-2.4.1\bin

plugin install file:///C:\install\logserver\elasticsearch-kopf-maste-201.zip

service install

service manager

# on service manager popup change to automatic and start the service
# Check elasticsearch and kopf running: "http://localhost:9200" "http://localhost:9200/_plugin/kopf"
# configure first index using kopf, go to "http://localhost:9200/_plugin/kopf", choose more -> index templates , enter template name "logstash" , copy content of
# indexTemplate-softov.xml to body field and press save




# create services for logstash and kibana
cd C:\logserver\nssm-2.24\win64\

nssm install Kibana-4.6.2


#Path: C:\logserver\kibana-4.6.2-windows-x86\bin\kibana.bat
#Arguments:




nssm install Logstash-2.4.0


#Path: C:\logserver\logstash-2.4.0\bin\logstash.bat
#Arguments: -w 2 -f logstash.conf
#***** -w number of cpu cores

#start kibana and logstash services
services.msc

#check Kibana is running "http://localhost:5601" (takes a few seconds to load)




#Send the first syslog message with PowerShell:
#copy and paste the content of send_syslog.ps1 into the PowerShell prompt
powershell


# Refresh kibana page
# In Kibana press the "Create" button (should now be green instead of gray) to create the default index





# if still in the PowerShell prompt, type:
exit

#check the indices, copy this address into the browser
http://localhost:9200/_cat/indices?v


#install curator
#msiexec /qn /l* curator-log.txt /i C:\install\logserver\elasticsearch-curator-4.1.2-win32.msi
C:\install\logserver\elasticsearch-curator-4.1.2-win32.msi

#copy curator config files
copy "C:\install\logserver\conf.yml" "C:\Program Files\elasticsearch-curator\"
copy "C:\install\logserver\delIndex.yaml" "C:\Program Files\elasticsearch-curator\"

#Create Scheduled task
schtasks /create /tn Curator /tr "\"C:\Program Files\elasticsearch-curator\curator.exe\" --config conf.yml delIndex.yaml" /sc daily /st 01:00:00 /ru SYSTEM /rl HIGHEST /NP /v1

# you can test curator with the del_all_indices.yaml content; when you check the indices in the browser as above you can see one index was created named logstash-<date>
#you can temporarily replace the delIndex.yaml content with del_all_indices.yaml to test that the scheduled task works and deletes the index above

#to manage how many days of old indexes curator will delete, edit the file C:\Program Files\elasticsearch-curator\delIndex.yaml; the "unit_count:" value is how many days back to delete.
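#For reference, delIndex.yaml is not shown in this post, but a Curator 4.x action file for this kind of daily cleanup usually looks roughly like the sketch below (the logstash- prefix, the timestring and the 30-day unit_count are example values to adjust):

actions:
  1:
    action: delete_indices
    description: Delete logstash indices older than 30 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30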



#Move indexes to another drive (if the C drive is too small):
#edit with Notepad++
C:\logserver\elasticsearch-2.4.1\config\elasticsearch.yml
# uncomment this line and change destination
# path.data: /path/to/data
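# on Windows it would end up looking something like this (example path, any folder on the bigger drive works):
# path.data: D:\logserver\data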

#restart elasticsearch service



Tuesday, August 29, 2017

Windows 10 Enterprise Build 1703 Cannot Change Time zone


To change the time zone, run this command in an elevated command prompt:

tzutil /s "China Standard Time"
 
to find your own time zone, run this command to list all the available time zones and just copy/paste the name into the command above:
 

tzutil /l
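Since the list is long, you can filter it, for example to find the China entry used above:

tzutil /l | findstr /i "china"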
 
source: https://technet.microsoft.com/en-us/library/hh825053.aspx 

Monday, August 28, 2017

PowerShell Script to send Notification on new dhcp lease

Here is a little PowerShell script I assembled from different sources to send me an email notification when there is a new DHCP lease for a computer that is not in the AD.




#step 1: get computer names from dhcp
Get-DhcpServerv4Lease -allleases -ScopeId "192.168.X.0" -ComputerName "DHCPSERVERNAME" |
 Select-Object @{expression= {$_.hostname}; label='name' } | export-CSV -notypeinformation C:\temp\dhcp\LeaseLog.csv

$leaselogpath = "c:\temp\DHCP\LeaseLog.csv"
$dhcplist = Import-csv -path $leaselogpath  | ForEach-Object -Process {$_.Name.Replace(".domain.local",$null)}

#step 2: get computer names from AD
import-module activedirectory

$adlist = (Get-ADComputer -filter *).name

#step 3: compare both above lists to find new computer name from dhcp

$comparedlist = (Compare-Object -ReferenceObject $adlist -DifferenceObject $dhcplist ).InputObject

#step 4: compare against static computers in the network

$staticlist = Get-Content C:\temp\DHCP\static.txt
$newdhcp = (Compare-Object $staticlist $comparedlist).InputObject


#step 5: send the email only when there is a new name from all the comparisons
           
if($newdhcp) {           
    #send email to sysadmin
$smtpserver = "SMTPADDRESS"
$from="dhcp@domain.local"
$to="admin@domain.local"
$subject="New Non-AD joined DHCP clients"
$body= "$newdhcp `n
If it is legit, add it to c:\temp\dhcp\static.txt list"
$mailer = new-object Net.Mail.SMTPclient($smtpserver)
$msg = new-object Net.Mail.MailMessage($from,$to,$subject,$body)
$msg.IsBodyHTML = $false
$mailer.send($msg)           
} else {           
               
}

Some issues with this script:
1. Need to maintain the static.txt list
2. Duplicate names in DHCP will be emailed as a new lease
3. When a computer that is in the AD is disconnected from the network and its DHCP lease expires, you will get a notification as if it were a new lease
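For issue 2, one possible (untested) tweak is to de-duplicate the DHCP host names before comparing, e.g. by changing the $dhcplist line to:

$dhcplist = Import-Csv -Path $leaselogpath |
    ForEach-Object -Process { $_.Name.Replace(".domain.local",$null) } |
    Sort-Object -Unique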

Wednesday, August 2, 2017

Convert Vmware Vm to Hyper-v VM

First , Download and install Microsoft Virtual Machine Converter 3.0:
https://www.microsoft.com/en-us/download/details.aspx?id=42497

Next, if you still don't have VMware Converter, go ahead and download and install it too:
https://www.vmware.com/il/products/converter.html

After we have VMware Converter up and running, shut down the VM you want to convert and open VMware Converter. Select Convert machine, choose Powered off together with VMware Infrastructure virtual machine, enter your vCenter or ESXi host address and credentials to connect, choose the desired virtual machine from the inventory and press Next. For the destination type choose VMware Workstation or other VMware virtual machine, and choose a shared drive to store the converted VMDK file.
At the next step review the virtual machine configuration and hit Finish on the next screen to start the conversion.

Once finished, you will have the hard drives of the virtual machine as VMDK files. Now we will use PowerShell to convert them to VHDX format for use in Hyper-V. Open PowerShell as administrator and import the module from the Microsoft Virtual Machine Converter we downloaded in the first step:

Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

Now use this PowerShell command to begin the conversion:

ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath d:\scratch\vmx\VM-disk1.vmdk -VhdType DynamicHardDisk -VhdFormat vhdx -destination c:\vm-disk1

Once finished, you will have a VHDX file that you now need to copy to your Hyper-V storage.
Next, just add the file as a disk to your Hyper-V virtual machine and it should boot just as it did in VMware.
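If the VM has more than one disk, the same cmdlet can be run in a loop over all the VMDK files, something like this (the paths are just examples):

Get-ChildItem d:\scratch\vmx\*.vmdk | ForEach-Object {
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath $_.FullName -VhdType DynamicHardDisk -VhdFormat vhdx -destination "c:\converted\$($_.BaseName)"
}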


Another option (not tested by me) is to bypass the VMware Converter step and just copy the VMDK file straight from the ESXi datastore.
Again, I have not tried this.

Good Luck

Tuesday, April 8, 2014

Exchange Server 2010 Information Store Service not starting on Vmware VM


Exchange Server 2010 Information Store not starting

I came across this issue when I tried to restart the Information Store service.
It all started when a Backup Exec job failed with an error and caused the VSS Microsoft Exchange Writer to become unstable and get stuck on "Retryable error".

Restarting the service and the server did not solve the problem. Then I found some logs and discussions about this problem on the web suggesting that there may be a time difference between the VM and the VM host.

So I checked the time on the VM host and found out that it was just a couple of minutes off, so I updated the time on the host, added an NTP time server, restarted the Exchange VM and all services started just fine.
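One quick way to spot that kind of drift from inside the guest is w32tm against a reference time source, for example (dc01.domain.local here is just a placeholder for your domain controller or NTP server):

w32tm /stripchart /computer:dc01.domain.local /samples:3 /dataonly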
 
 

 
 
 
The environment in question was vSphere 5.1 with an ESXi 5.1 host and an Exchange 2010 VM.