Digital music - Hardware - Software - Content

Thanhvo31

Well-Known Member
Windows On RaspberryPi
https://www.worproject.ml/
https://rpi4-uefi.dev/alternate-guide-running-windows-10-on-the-pi-4/
https://web.telegram.org/#/im?p=@raspberrypiwoa
https://discourse.pi64.win/t/windows-10-on-pi-4-installation-guide/664

Hello! I’m @Comstepr#0001 on Discord and this is an installation guide on how to install Windows 10 on Pi 4.

Before you get started, ensure you back up all data on the storage device (whether it is an SD card or a USB drive) you are doing this on. I am not responsible for anything that goes wrong as a result of following this installation guide.

Firstly, join the Raspberry Pi Stuff Discord. It contains updated versions of things that you can download from this thread.
Discord join code: MKhZhqQ
You can continue with the tutorial without joining the Discord, but some downloads may be outdated.

Requirements:

  • An SD card with more than 8GB of space, or a USB hard drive
  • Download the ISO from the Discord mentioned above or use this:
https://drive.google.com/file/d/1f-2Jb0DxsbLIFEezdwfEQthEG_UPXseq/view?usp=sharing

  • Download the 3GB RAM fix
https://cdn.discordapp.com/attachments/711408939333320735/731678807987060796/3_Gig_Fix.rar

  • Latest pre-release of WoR
www.worproject.ml/downloads

If you are in the Discord, download updated ISOs in #downloads

To use a USB hard drive, check a YouTube tutorial from ExplainingComputers. Skip to 12:57

youtu.be/2zrwjGcyM5s?t=777

  1. Extract the WoR Zip file.
  2. Open WoR.exe as an administrator.
  3. Select your language and press Next.
  4. Select “Raspberry Pi 4” and select the SD card/USB hard drive you are doing this on. Ensure that you have backed up everything on the storage device as it will be deleted.
  5. Select the ISO you downloaded. Click Next.
  6. For drivers, use the latest package available on the server. Click Next.
  7. Extract 3_Gig_Fix.rar.
  8. Go back to WoR and select “Use a firmware stored…”. Select “Optional Uefi Firmware 1.17 Dev Beta.zip” and click Next.
  9. I’ll be using MBR and Windows Imaging.
  10. Click Next.
  11. Ensure that you have configured everything correctly. Click Install once confirmed.
  12. Once WoR has completed the installation, open a Command Prompt as administrator.
  13. Navigate to the 3_Gig_Fix folder, as it contains winpatch.exe.
  14. Use this command but replace F:\ with the drive letter of the Pi 4’s Windows partition.
  15. winpatch F:\Windows\System32\drivers\USBXHCI.SYS 910063E8370000EA 910063E8360000EA 3700010AD5033F9F 3600010AD5033F9F

  16. Use this command but replace Y:\ with the drive letter of the Pi 4’s Boot partition.
  17. bcdedit /store Y:\EFI\Microsoft\Boot\BCD /set {default} truncatememory 3221225472
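As a side note, the truncatememory value used above is simply 3 GiB expressed in bytes, which you can verify with a quick shell calculation:

```shell
# 3 GiB in bytes - this is where the 3221225472 value comes from
echo $((3 * 1024 * 1024 * 1024))   # prints 3221225472
```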

  18. Once you have completed all the steps, safely remove the USB drive/SD card (by ejecting it in Windows).
  19. Attach the USB drive or SD card into your Pi 4.
  20. Attach all the peripherals and USB devices you will be using on the Raspberry Pi 4. As soon as the Pi 4 boots up, you should not plug in/unplug USB devices.
  21. Turn on the Pi 4 and continue with the installation process.
Some things to note
Do not use a USB-C adapter

Credits to System64#6166 on Discord for assisting a lot with this.

Thank you!
 

Thanhvo31

Well-Known Member
https://plextips.plexed.co.uk/rclone/rclone-on-synology/

Synology
DISCLAIMER

Any information provided here is provided as is.

There are no guarantees that it will work for you and I will not be held responsible for any damage or data loss.

Using anything from here is at your own risk.

So if you still wish to continue here is what (I think) you need to know to set up an rclone mount on a Synology NAS

If you need help you can find me on the plex forum or rclone forum @blim5001

Before you start, you will need to know a little, as the information here WILL require using the command line, so you should be familiar with connecting to your NAS via SSH.
If that is not something you feel comfortable with, then I suggest you do not carry on.

In order to mount remote filesystems you will need the FUSE libraries. These are not available in the Synology OS by default, so you need to enable the synocommunity package repository in the DSM Package Manager, which lets you install an additional package that provides these libraries. For more info on that, see: https://synocommunity.com/ The package you need to install from the synocommunity repository is SSHFS.

UPDATE (2020):

It seems there are now some new packages available in the synocommunity repo that are available for more architectures. There are a few packages under the SynoCli grouping. I think the only one you should need is SynoCli Disk Tools.

I am going to give it a try and see if it works. (Hope it does as SSHFS is no longer available). Unmounted my drive, removed the SSHFS package and installed SynoCli Disk Tools and remounted. All seems good so far.

You should not really use the root user for this. It should all work as your main NAS Admin account (at least it does in my setup)

On my NAS I have homes setup (this means your home directory should be accessible from the other machines on the network) and so for this guide we will say I have:

  • A user account: admin
  • A folder on my nas at this location /volume1/homes/admin
  • I connect to the NAS via ssh using this admin account
  • An rclone config to my Google Drive, which in this case is gDrive:
[Note: You do not need to have ‘homes’ enabled, but if not then I would create a folder on your raid volume to store the programs and scripts]

If you are on a Mac then you can use the Terminal program from your Utilities folder.
For Windows users (which I am not), I guess it's PuTTY for you guys and girls.

I am not a fan of the default Shell that is set when you login via SSH.

So the first command to run after connecting via SSH is this:

exec /bin/bash
That will put you into a bash shell (which may prevent issues where you need to escape command line flags; you will see what I mean by these as we go on).

So you should now have installed the SSHFS package and be connected to your NAS via SSH

Now let’s install rclone with the following command:

curl https://rclone.org/install.sh | sudo bash
You will probably need to enter your password to complete the install

At which point it will install rclone to the /usr/bin directory
Don’t worry about the:
bash: line 153: mandb: command not found warning
You can safely ignore this.

To check it is installed there run this command:

/usr/bin/rclone -V
If it is installed at that location, you should see something like:

rclone v1.43
- os/arch: linux/amd64
- go version: go1.11
Then try just

rclone -V
If that gives the same result, then fine: your system knows where rclone is. If not, exit from your SSH session and log in again (I have found that sometimes the path does not get updated until you log out and log back in).

Now you need to configure rclone to connect to your google drive.
There are detailed instructions here https://rclone.org/drive/, so I really don't think I need to repeat all that here.

Once you have done that you should have a working rclone connection to your google drive. Test this with

rclone lsd gDrive:
Replace gDrive with whatever you called your remote when you did your rclone config.

This should list all the Directories in the top level of your google drive.

All we need to do now is create a mount point on your NAS on which to mount your google drive.

For this example I am going to mount it to a folder in my home directory.

Create a folder. I would do this through SSH to prevent DSM adding the annoying @eaDir directory:

mkdir /volume1/homes/admin/googledrive

Next mount the drive

/usr/bin/rclone mount -v gDrive: /volume1/homes/admin/googledrive --allow-other --dir-cache-time 672h --vfs-cache-max-age 675h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 32M &
This should mount the drive and put the mount process into the background.

If you did not get any errors then run:

ls /volume1/homes/admin/googledrive
This should list the files and folders in the top level of your google drive

If you get an error “Command mount needs 2 arguments maximum” when you run the mount command, you will need to make sure the drive is not mounted before continuing.
(This is what I meant earlier about escaping command line flags; not sure why you would need to do this, as I do not need to… but anyway.)

So run this command to unmount the drive:

fusermount -uz /volume1/homes/admin/googledrive
and again if this throws an error, then use this instead

fusermount \-uz /volume1/homes/admin/googledrive
After running the fusermount command, run this command to see/check if your drive is mounted:

ls /volume1/homes/admin/googledrive
If it is empty then your drive is not mounted and you can now try a slightly different mount command:
(We have had to escape all the command line flags by putting a backslash in front of them.)

/usr/bin/rclone mount \-v gDrive: /volume1/homes/admin/googledrive \--allow-other \--dir-cache-time 672h \--vfs-cache-max-age 675h \--vfs-read-chunk-size 64M \--vfs-read-chunk-size-limit 1G \--buffer-size 32M &
If there are no errors this time, then run:

ls /volume1/homes/admin/googledrive
You should see a list of all the files in your Google Drive.

If yes, then success, it’s working. Your google drive is mounted on your NAS

You can now go and create your libraries in Plex and point them to this mount point.

Next we will come onto mounting when you reboot the NAS, and periodically checking the mounts and remounting if necessary.

tbc…

To mount at NAS boot, do the following:

Create a bash file with the following content:

Code:
#!/bin/ash

/usr/bin/rclone --config /var/services/homes/admin/.config/rclone/rclone.conf mount -v YOURDRIVE: /volume1/homes/admin/YOUR_MOUNT_POINT --allow-other --dir-cache-time 672h --vfs-cache-max-age 675h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 32M &

Name it 'mountGD.sh', for example.
Copy the file somewhere easy to find; I copied it to /volume1/homes/admin/mountGD.sh

SSH into the NAS (admin/yourpass)
Make the file executable:
$ sudo chmod +x /volume1/homes/admin/mountGD.sh

From here, run the script to check whether the drive mounts successfully.

Exit the shell.

Go to the NAS web interface.

Go to Control Panel > Task Scheduler > Create button > Triggered Task > User-defined script
On the General Settings tab:
Task name: your choice, e.g. MOUNT_GG
User: admin
Event: Boot-up
Switch to the Task Settings tab.
In the User-defined script box:
bash /volume1/homes/admin/mountGD.sh

Click OK.

Try restarting the NAS; Google Drive should now come up at the mount point right away.

ENJOY !!!

PS: note that you must add the extra parameter
--config /var/services/homes/admin/.config/rclone/rclone.conf

This path is taken from the output of `rclone config file`.
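The boot task above covers mounting at startup. For the periodic check-and-remount mentioned earlier, a minimal sketch could look like this (the mount point and rclone command are the ones used in this guide; `check_mount` is a hypothetical helper that treats a non-empty directory as a live mount):

```shell
#!/bin/bash
# check_mount: succeeds (exit 0) if the directory has at least one entry,
# which is how this guide tests that the rclone mount is alive.
check_mount() {
    [ -n "$(ls -A "$1" 2>/dev/null)" ]
}

MOUNT_POINT="/volume1/homes/admin/googledrive"

if ! check_mount "$MOUNT_POINT"; then
    # Clean up any stale FUSE mount, then remount with the same command as above
    fusermount -uz "$MOUNT_POINT" 2>/dev/null
    /usr/bin/rclone --config /var/services/homes/admin/.config/rclone/rclone.conf \
        mount -v gDrive: "$MOUNT_POINT" --allow-other 2>/dev/null &
fi
```

You could schedule this in Task Scheduler the same way as the boot script, but with a repeating schedule instead of the Boot-up event.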

With this Rclone - Gsuite - NAS - Roon Core combination on one PC, you get a pretty good music player without spending a lot of money on HDDs (tested: plays smoothly up to 24/96kHz).
Will test DSD next and report back.
 

Thanhvo31

Well-Known Member
https://www.runeaudio.com/forum/runeaudio-r-e5-t7105.html

RuneAudio+R e5

For all Raspberry Pi: Zero, 1, 2, 3 and 4

A new edition of RuneAudio+R

  • - Upgrade kernel and all packages to latest versions
    - Faster

Image files:
Mirrors - Europe: (Courtesy of Andy)
RPi 4: RuneAudio+R_e5-RPi4.img.xz
RPi 3 and 2: RuneAudio+R_e5-RPi2-3.img.xz
RPi 1 and Zero: RuneAudio+R_e5-RPi0-1.img.xz

Mirrors - Asia
RPi 4: RuneAudio+R_e5-RPi4.img.xz
RPi 3 and 2: RuneAudio+R_e5-RPi2-3.img.xz
RPi 1 and Zero: RuneAudio+R_e5-RPi0-1.img.xz

DIY: RuneOS

[Image: guide.gif]



How-to:

- Windows: Download and decompress to RuneAudio+R_e5-RPi*.img with 7-zip, WinRAR or WinZip and write the file to a micro SD card, 4GB or more, with Win32 Disk Imager
- Linux or Windows: Write from URL to a micro SD card directly with Etcher

- Existing users:

  • - Keep or Backup current setup SD card.
    - Try with a spare one before moving forward.
- Before power on:

  • - Wi-Fi pre-configure - 3 alternatives: (Only if no wired LAN available.)

    • 1. From existing

      • - Copy an existing profile file from /etc/netctl .
        - Rename it to wifi then copy it to BOOT before power on.
      2. Edit template file - name and password

      • - Rename wifi0 in BOOT to wifi .
        - Edit SSID and Key.
      3. Generate a complex profile - static IP, hidden SSID

- If connected to a screen, the IP address and a QR code for connecting from remote devices are displayed.
- Before setting up anything, Settings > Addons > RuneAudio+R e5 > Update (if available)
- Restore settings and database: Settings > System > Backup/Restore Settings (if there is one)
- Music Library database - a USB drive is scanned automatically if already plugged in; otherwise Settings > Update Library (icon next to MPD)

Not working?
- Power off and wait a few seconds then power on
- If not connected, temporarily connect wired LAN, then remove it after Wi-Fi is set up successfully.
- Still not working? Download the image file and start over again.


RuneAudio+R - An improved version of RuneAudio
- Based on features from RuneUI Enhancement
- Complete frontend redesigned
- Complete backend rebuilt with latest Arch Linux Arm
- System-wide upgraded to the latest kernel and packages
- Improved performance and response
- Metadata Tag Editor - *.cue support
- Album mode with coverarts
- File mode with thumbnail icons
- Coverarts and bookmarks - add, replace and remove
- Support WebRadio and UPnP coverarts - online fetched
- Support *.jpg, *.png and animated *.gif
- Support *.cue - virtually as individual tracks in all modes and user playlists
- Support *.wav album artists and sort tracks
- Renderers / Clients and Streamers (with metadata and coverarts)

  • - AirPlay
    - Spotify Connect (Premium account only)
    - UPnP
    - Snapcast with metadata and coverarts
    - simple HTTP


Tips
- Best sound quality:

  • - Settings > MPD > Bit-perfect - Enable
    - Use only amplifier volume (Unless quality of DAC hardware volume is better.)
- Disable features if not used to lower CPU usage:

  • Settings > System > Features
- Coverart as large playback control buttons

  • - Tap top of coverart to see controls guide.
- Hide top and bottom bars

  • - No need for top and bottom bars
    - Use coverart controls instead of top bar buttons
    - Swipe to switch between pages
    <- Library <-> Playback <-> Playlist ->
- Drag to arrange order

  • - Library home blocks
    - Playlist tracks
    - Saved playlist tracks
- Some coverarts missing from album directories

  • - Subdirectories listed after partial Library database update from context menu.
    - Subdirectories - context menu > Exclude directory
- Some music files missing from library

  • - Settings > MPD > question mark icon -scroll- FFmpeg Decoder
    - Enable if filetypes list contains ones of the missing files.
- CUE sheet

  • - *.cue filenames must be identical to each corresponding music file.
    - Can be directly edited by Tag Editor.
- Minimum permission for music files (on Linux ext filesystem)

  • - Directories: rwxr-xr-x (755)
    - Files: rw-r--r-- (644)
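Those permissions can be applied to a whole music tree in one go; a quick sketch (the `Music` path is just an example, and the demo creates its own tree):

```shell
# Demo tree standing in for a real music folder
mkdir -p Music/Album && touch Music/Album/track.flac

# Directories get 755 (rwxr-xr-x), files get 644 (rw-r--r--)
find Music -type d -exec chmod 755 {} +
find Music -type f -exec chmod 644 {} +

# Verify the resulting octal modes
stat -c '%a %n' Music Music/Album/track.flac
```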
- RPi to router connection:

  • - With wired LAN if possible - Disable Wi-Fi
    - With WiFi if necessary
    - With RPi accesspoint only if there's no router
- Connect to RuneAudio with IP address instead of runeaudio.local

  • - Get IP address: Menu > Network > Network Interfaces list
    - Setup static IP address

Static IP address
Always setup at the router unless there is a good reason not to.

  • Set at each device:

    • - IP addresses have to be in the same range as the router.
      - IP addresses must not duplicate existing ones.
      - IP addresses must be reconfigured on every OS reinstallation.
      - A log is needed to manually track all assigned IP address-device data.
    Set at the router:

    • - The router only allows reserved IP addresses in the same range.
      - Reserved IP addresses are verified not to duplicate.
      - The device always gets the same IP address on every OS reinstallation, without reconfiguration.
      - The router always keeps an up-to-date log of all IP address-device data.


rern
 

Thanhvo31

Well-Known Member
Leech Fshare >> Google Drive
You need:
1) A Fshare VIP account (https://www.fshare.vn/payment/service)
2) A Google Drive Unlimited account (https://www.ebay.com/itm/303591182230)
3) A VPS - virtual private server (https://www.hostinger.vn/huong-dan/vps-la-gi-tat-ca-cac-dieu-can-biet-ve-may-chu-ao/).
I use a Viettel Internet subscription (200K VND/month, 100Mbps) + a Tinkerboard S (16GB eMMC, which I prefer over a Pi on microSD; an SSD attached to a Pi via USB 3.0 also works)
One SBC (Pi, Tinkerboard, Orange Pi, ...) + a server install +


Follow the guide here:
https://github.com/duythongle/fshare2gdrive.
fshare2gdrive
NodeJS script for direct uploading from FShare.vn to Google Drive without storing files locally.

For deprecated bash script (download.sh and login.sh), please see here.

Features
  • Pipe upload to GDrive without storing file locally. No huge storage needed! (thanks to RClone rcat feature)

  • Download whole FShare folder recursively with folder path preserved

  • Download in parallel (NOT recommended) and Resumable (thanks to GNU Parallel --resume)
Dependencies
  1. RClone
# Install RClone
curl -s https://rclone.org/install.sh | sudo bash

# Login GDrive for RClone.
rclone config

Please see RClone official documents support for Google Drive for more details.

  2. NodeJS 10+, GNU Parallel and curl
# Install dependencies on Ubuntu
sudo apt-get update && \
sudo apt-get install parallel curl -y && \
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash && \
sudo apt install -y nodejs
Usage
This script is recommended to run on an unlimited-bandwidth VPS, or it will get costly over time.

  1. Log in to FShare
# Login FShare
curl -sS https://raw.githubusercontent.com/duythongle/fshare2gdrive/master/fshare2gdrive.js | \
tail -n+2 | node - login "your_fshare_email" "your_fshare_password"

You only need to log in once. Login credentials will be saved to $HOME/.creds in PLAIN TEXT for later use, so use with caution!
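Since the file is stored in plain text, it is worth restricting it to your own user. A small demo with a stand-in file (apply the same chmod to the real $HOME/.creds):

```shell
# Stand-in for the credentials file saved by the script
touch creds.demo

# Owner read/write only; group and others get nothing
chmod 600 creds.demo
stat -c '%a' creds.demo   # prints 600
```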

  2. Download a single FShare FILE to GDrive
curl -sS https://raw.githubusercontent.com/duythongle/fshare2gdrive/master/fshare2gdrive.js | \
tail -n+2 | node - "<fshare_file_url>" "<rclone_remote_name>" "<remote_folder_path>" | bash -s

<fshare_file_url>: your fshare file link.

<rclone_remote_name>: your rclone remote name that you have configured in step 1

<remote_folder_path>: your remote folder path you want to upload to.

Don't forget to double quote your parameters

E.g:

# the command below will download "https://www.fshare.vn/file/XXXXXXXXXXX"
# and pipe upload to "rclone rcat gdrive-remote:/RClone Upload/"
curl -sS https://raw.githubusercontent.com/duythongle/fshare2gdrive/master/fshare2gdrive.js | \
tail -n+2 | node - "https://www.fshare.vn/file/XXXXXXXXXXX" "gdrive-remote" "/RClone Upload/"
  3. Download a whole FShare FOLDER to GDrive SYNCHRONOUSLY (one file at a time) - the RECOMMENDED way
# Generate the single-file download commands and run them one by one
curl -sS https://raw.githubusercontent.com/duythongle/fshare2gdrive/master/fshare2gdrive.js | \
tail -n+2 | node - "<fshare_folder_url>" "<rclone_remote_name>" "<remote_folder_path>" | bash -s

<fshare_folder_url>: your fshare folder link.

<rclone_remote_name>: your rclone remote name that you have configured in step 1

<remote_folder_path>: your remote folder path you want to upload to.

E.g:

# Generate single file download commands list and run one by one
curl -sS https://raw.githubusercontent.com/duythongle/fshare2gdrive/master/fshare2gdrive.js | \
tail -n+2 | node - \
"https://www.fshare.vn/folder/XXXXXXXXXXX" "gdrive-remote" "/RClone Upload/" | bash -s

You can make use of GNU Parallel to download in multiple simultaneous jobs as in the example below - NOT the recommended way!!!

# Generate single file download commands list for later use to a file "/tmp/commands_list"
curl -sS https://raw.githubusercontent.com/duythongle/fshare2gdrive/master/fshare2gdrive.js | \
tail -n+2 | node - "https://www.fshare.vn/folder/XXXXXXXXXXX" "gdrive-remote" "/RClone Upload/" \
> /tmp/commands_list

# Start running all commands list to download in parallel with resumable
# download jobs will run in 2 simultaneous jobs with "-j 2"
parallel -j 2 --bar --resume --joblog /tmp/fshare2gdrive.joblogs < /tmp/commands_list

Use parallel download ("parallel -j 2" or greater) ONLY when you are sure all folders, including subfolders, already exist in the remote folder path, or rclone will create duplicate folders! If you keep getting ssh timeout issues, make use of Tmux or an ssh config file.


Good luck!
 

Thanhvo31

Well-Known Member
How to get the root password using live distros
****************************************
While tinkering with distributions that lock down or do not provide the root password, surely we don't just give up.
Today I found a way to get the root password for those, such as Eu*-phony, R0o*n, Volumy0.
I have done it successfully on Eu*-phony. Hopefully the same approach works for the others.
When I have time I will try them and confirm.
First, go to these pages:
http://www.microhowto.info/howto/reset_a_forgotten_root_password_using_a_live_distribution.html.
Distrowatch.com
Find out which distribution the build you want to reset is based on, and get the corresponding live distro:
Eu*-phony = Archlinux x64
Volumio = Debian (x64 or arm32/64)
R0onRock = per Danny Dulai (Roon Labs COO, Feb '17): "@TopQuark – There is no Linux distribution it is based on. The whole OS is built from scratch. Starting from cross compilers, the whole thing is custom built."
== but never mind, any Linux can probably handle it

For Eu*-phony I downloaded Manjaro Linux.
Follow reset_a_forgotten_root_password_using_a_live_distribution:

Objective
To reset the root password of a machine when it has been forgotten.

(This method is also applicable where the machine is administered from some other account using sudo, as is the default on Ubuntu.)

Scenario
You are unable to log into the root account of a machine because you have forgotten the password. The machine has one hard drive with the following partitions:

  • The root partition is /dev/sda2;
  • /usr is /dev/sda5;
  • /var is /dev/sda6; and
  • /home is /dev/sda7.
Method
Overview
In order to reset the password you need to mount the root filing system of the machine to be recovered, but without booting the operating system on that partition. A convenient way to do this is by means of a live GNU/Linux distribution: one that can be booted from a removable medium without being installed on the machine. It will need to:

  • be on a medium that the machine has the ability to boot from;
  • be sufficiently compatible with the hardware to at least provide a text console and the ability to mount filing systems (including ones located on RAID devices or LVM volumes if applicable);
  • be able to run binaries from the machine to be recovered.
A current version of Ubuntu or Knoppix will suffice for most purposes, but for specialised requirements you may need to look further afield (or even build your own). It is possible to recover a 32-bit (i386) system with a 64-bit (amd64) distribution, but not vice versa.

Boot into the live distribution
In order to boot into the live distribution you may need to reconfigure the BIOS to ensure that the machine boots from the relevant removable device in preference to the hard drive. Remember to revert any such changes when you have finished.

Mount the root partition
Mount the root partition of the system to be recovered:

mkdir /mnt/recover
mount /dev/sda2 /mnt/recover

It should not be necessary to mount any other partition unless you have an unusual configuration. Note that the live distribution will not necessarily assign the same device name to each hard drive as the system being recovered (but it should assign the same partition numbers).

chroot into the root partition
The chroot command allows you to move the filesystem root to some subdirectory of the current root. In this case you want to move it to /mnt/recover:

chroot /mnt/recover

This effectively makes you the root user of the system to be recovered. For example, the file that was /mnt/recover/etc/passwd now appears as /etc/passwd. Any commands you execute will use binaries from the hard drive, not the live distribution.

Change the root password
As the root user of the system to be recovered you should now be able to change the root password in the normal manner:

passwd

The passwords for other local accounts can be changed similarly:

passwd user

Because you are root, it should not be necessary to enter the previous password.

Note that passwords provided by a remote authentication protocol such as Kerberos or LDAP cannot be reset using this method.

Exit from the chroot
You can exit from the chroot shell in the same way as any other shell, for example using the exit command:

exit

or by pressing control-D.

Unmount the root partition
umount /mnt/recover

Variations
Directly editing the password file
It is possible to achieve the same effect by directly editing the password file. This is significantly more risky than using the passwd command, but may prove useful if you can edit files but are unable to execute binaries.

The file you need to edit is /etc/passwd. Each line is a colon-separated list of fields, the first two of which are the username and password for an account. Here is a sample:

root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh

In each of these four entries of this example the password field is set to ‘x’, meaning that the encrypted password can be found in /etc/shadow. If you replace the ‘x’ (or whatever else is in the second field) with the empty string then no password will be needed:

root::0:0:root:/root:/bin/bash

It would be prudent to make a backup of /etc/passwd before making any changes, because the mapping between usernames and UIDs would be very tedious to reconstruct if it were lost. You should also consider isolating the machine from any networks while it is without a root password, as it will obviously be very insecure during this period.

The ‘x’ should be re-inserted before setting a new root password, otherwise it will be stored in /etc/passwd instead of /etc/shadow.
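The edit above can be scripted. This sketch works on a sample file so you can see the effect safely; on the real system the file is /etc/passwd, and you should keep the backup:

```shell
# Sample passwd entry standing in for the real /etc/passwd
printf 'root:x:0:0:root:/root:/bin/bash\n' > passwd.sample

# Backup first - the username/UID mapping is tedious to reconstruct
cp passwd.sample passwd.sample.bak

# Empty the password field (second colon-separated field) of the root entry
sed -i 's/^root:[^:]*:/root::/' passwd.sample
cat passwd.sample   # root::0:0:root:/root:/bin/bash
```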

-- good luck, everyone --
 

thanhvo35

Active Member
TUTORIAL
How To Use Systemctl to Manage Systemd Services and Units
System Tools

  • By Justin Ellingwood

    Last validated on November 11, 2020 · Originally published on February 1, 2015
Introduction
systemd is an init system and system manager that has widely become the new standard for Linux distributions. Due to its heavy adoption, familiarizing yourself with systemd is well worth the trouble, as it will make administering servers considerably easier. Learning about and utilizing the tools and daemons that comprise systemd will help you better appreciate the power, flexibility, and capabilities it provides, or at least help you to do your job with minimal hassle.

In this guide, we will be discussing the systemctl command, which is the central management tool for controlling the init system. We will cover how to manage services, check statuses, change system states, and work with the configuration files.

Please note that although systemd has become the default init system for many Linux distributions, it isn’t implemented universally across all distros. As you go through this tutorial, if your terminal outputs the error bash: systemctl is not installed then it is likely that your machine has a different init system installed.

Service Management
The fundamental purpose of an init system is to initialize the components that must be started after the Linux kernel is booted (traditionally known as “userland” components). The init system is also used to manage services and daemons for the server at any point while the system is running. With that in mind, we will start with some basic service management operations.

In systemd, the target of most actions are “units”, which are resources that systemd knows how to manage. Units are categorized by the type of resource they represent and they are defined with files known as unit files. The type of each unit can be inferred from the suffix on the end of the file.

For service management tasks, the target unit will be service units, which have unit files with a suffix of .service. However, for most service management commands, you can actually leave off the .service suffix, as systemd is smart enough to know that you probably want to operate on a service when using service management commands.

Starting and Stopping Services
To start a systemd service, executing instructions in the service’s unit file, use the start command. If you are running as a non-root user, you will have to use sudo since this will affect the state of the operating system:

  • sudo systemctl start application.service
As we mentioned above, systemd knows to look for *.service files for service management commands, so the command could just as easily be typed like this:

  • sudo systemctl start application
Although you may use the above format for general administration, for clarity, we will use the .service suffix for the remainder of the commands, to be explicit about the target we are operating on.

To stop a currently running service, you can use the stop command instead:

  • sudo systemctl stop application.service
Restarting and Reloading
To restart a running service, you can use the restart command:

  • sudo systemctl restart application.service
If the application in question is able to reload its configuration files (without restarting), you can issue the reload command to initiate that process:

  • sudo systemctl reload application.service
If you are unsure whether the service has the functionality to reload its configuration, you can issue the reload-or-restart command. This will reload the configuration in-place if available. Otherwise, it will restart the service so the new configuration is picked up:

  • sudo systemctl reload-or-restart application.service
Enabling and Disabling Services
The above commands are useful for starting or stopping services during the current session. To tell systemd to start services automatically at boot, you must enable them.

To start a service at boot, use the enable command:

  • sudo systemctl enable application.service
This will create a symbolic link from the system’s copy of the service file (usually in /lib/systemd/system or /etc/systemd/system) into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants; we will go over what a target is later in this guide).

To disable the service from starting automatically, you can type:

  • sudo systemctl disable application.service
This will remove the symbolic link that indicated that the service should be started automatically.

Keep in mind that enabling a service does not start it in the current session. If you wish to start the service and also enable it at boot, you will have to issue both the start and enable commands.

Checking the Status of Services
To check the status of a service on your system, you can use the status command:

  • systemctl status application.service
This will provide you with the service state, the cgroup hierarchy, and the first few log lines.

For instance, when checking the status of an Nginx server, you may see output like this:

Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2015-01-27 19:41:23 EST; 22h ago
Main PID: 495 (nginx)
CGroup: /system.slice/nginx.service
├─495 nginx: master process /usr/bin/nginx -g pid /run/nginx.pid; error_log stderr;
└─496 nginx: worker process
Jan 27 19:41:23 desktop systemd[1]: Starting A high performance web server and a reverse proxy server...
Jan 27 19:41:23 desktop systemd[1]: Started A high performance web server and a reverse proxy server.

This gives you a nice overview of the current status of the application, notifying you of any problems and any actions that may be required.

There are also methods for checking for specific states. For instance, to check to see if a unit is currently active (running), you can use the is-active command:

  • systemctl is-active application.service
This will return the current unit state, which is usually active or inactive. The exit code will be “0” if it is active, making the result simpler to parse in shell scripts.
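Because the result is reported through the exit status, is-active drops neatly into shell conditionals. The sketch below is self-contained: a stub function mimics the exit-code behavior of `systemctl is-active --quiet` (exit 0 when active, non-zero otherwise), so the real command can be substituted on an actual system.

```shell
#!/bin/sh
# Stand-in for `systemctl is-active --quiet "$1"`:
# exits 0 for the one unit we pretend is active, non-zero otherwise.
is_active() {
    [ "$1" = "nginx.service" ]
}

# Branch on the exit status, exactly as a script would with systemctl.
if is_active "nginx.service"; then
    echo "nginx.service is running"
else
    echo "nginx.service is down"
fi
```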

To see if the unit is enabled, you can use the is-enabled command:

  • systemctl is-enabled application.service
This will output whether the service is enabled or disabled and will again set the exit code to “0” or “1” depending on the answer to the command question.

A third check is whether the unit is in a failed state. This indicates that there was a problem starting the unit in question:

  • systemctl is-failed application.service
This will return active if it is running properly or failed if an error occurred. If the unit was intentionally stopped, it may return unknown or inactive. Note that the exit status is the reverse of the previous checks: “0” indicates that a failure occurred, while a non-zero status indicates any other state.

System State Overview
The commands so far have been useful for managing single services, but they are not very helpful for exploring the current state of the system. There are a number of systemctl commands that provide this information.

Listing Current Units
To see a list of all of the active units that systemd knows about, we can use the list-units command:

  • systemctl list-units
This will show you a list of all of the units that systemd currently has active on the system. The output will look something like this:

Output
UNIT LOAD ACTIVE SUB DESCRIPTION
atd.service loaded active running ATD daemon
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
dbus.service loaded active running D-Bus System Message Bus
dcron.service loaded active running Periodic Command Scheduler
dkms.service loaded active exited Dynamic Kernel Modules System
getty@tty1.service loaded active running Getty on tty1
. . .

The output has the following columns:

  • UNIT: The systemd unit name
  • LOAD: Whether the unit’s configuration has been parsed by systemd. The configuration of loaded units is kept in memory.
  • ACTIVE: A summary state about whether the unit is active. This is usually a fairly basic way to tell if the unit has started successfully or not.
  • SUB: This is a lower-level state that indicates more detailed information about the unit. This often varies by unit type, state, and the actual method in which the unit runs.
  • DESCRIPTION: A short textual description of what the unit is/does.
Since the list-units command shows only active units by default, all of the entries above will show loaded in the LOAD column and active in the ACTIVE column. This display is actually the default behavior of systemctl when called without additional commands, so you will see the same thing if you call systemctl with no arguments:

  • systemctl
We can tell systemctl to output different information by adding additional flags. For instance, to see all of the units that systemd has loaded (or attempted to load), regardless of whether they are currently active, you can use the --all flag, like this:

  • systemctl list-units --all
This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.

You can use other flags to filter these results. For example, we can use the --state= flag to indicate the LOAD, ACTIVE, or SUB states that we wish to see. You will have to keep the --all flag so that systemctl allows non-active units to be displayed:

  • systemctl list-units --all --state=inactive
Another common filter is the --type= filter. We can tell systemctl to only display units of the type we are interested in. For example, to see only active service units, we can use:

  • systemctl list-units --type=service
Listing All Unit Files
The list-units command only displays units that systemd has attempted to parse and load into memory. Since systemd will only read units that it thinks it needs, this will not necessarily include all of the available units on the system. To see every available unit file within the systemd paths, including those that systemd has not attempted to load, you can use the list-unit-files command instead:

  • systemctl list-unit-files
Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.

Output
UNIT FILE STATE
proc-sys-fs-binfmt_misc.automount static
dev-hugepages.mount static
dev-mqueue.mount static
proc-fs-nfsd.mount static
proc-sys-fs-binfmt_misc.mount static
sys-fs-fuse-connections.mount static
sys-kernel-config.mount static
sys-kernel-debug.mount static
tmp.mount static
var-lib-nfs-rpc_pipefs.mount static
org.cups.cupsd.path enabled
. . .

The state will usually be enabled, disabled, static, or masked. In this context, static means that the unit file does not contain an [Install] section, which is required to enable a unit. As such, these units cannot be enabled. Usually, this means that the unit performs a one-off action or is used only as a dependency of another unit and should not be run by itself.
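For contrast, here is a minimal sketch of a unit that would be reported as static (the name and command are hypothetical): it has no [Install] section, so there is nothing for enable to link.

```ini
# /etc/systemd/system/cleanup-tmp.service (hypothetical)
[Unit]
Description=One-off cleanup task

[Service]
Type=oneshot
ExecStart=/usr/bin/find /tmp -type f -mtime +10 -delete

# No [Install] section: `systemctl list-unit-files` shows this unit as "static"
```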

We will cover what masked means momentarily.

Unit Management
So far, we have been working with services and displaying information about the unit and unit files that systemd knows about. However, we can find out more specific information about units using some additional commands.

Displaying a Unit File
To display the unit file that systemd has loaded into its system, you can use the cat command (this was added in systemd version 209). For instance, to see the unit file of the atd scheduling daemon, we could type:

  • systemctl cat atd.service
Output
[Unit]
Description=ATD daemon
[Service]
Type=forking
ExecStart=/usr/bin/atd
[Install]
WantedBy=multi-user.target

The output is the unit file as known to the currently running systemd process. This can be important if you have modified unit files recently or if you are overriding certain options in a unit file fragment (we will cover this later).

Displaying Dependencies
To see a unit’s dependency tree, you can use the list-dependencies command:

  • systemctl list-dependencies sshd.service
This will display a hierarchy mapping the dependencies that must be dealt with in order to start the unit in question. Dependencies, in this context, include those units that are either required by or wanted by the units above it.

Output
sshd.service
├─system.slice
└─basic.target
├─microcode.service
├─rhel-autorelabel-mark.service
├─rhel-autorelabel.service
├─rhel-configure.service
├─rhel-dmesg.service
├─rhel-loadmodules.service
├─paths.target
├─slices.target
. . .

The recursive dependencies are only displayed for .target units, which indicate system states. To recursively list all dependencies, include the --all flag.

To show reverse dependencies (units that depend on the specified unit), you can add the --reverse flag to the command. Other flags that are useful are the --before and --after flags, which can be used to show units that depend on the specified unit starting before and after themselves, respectively.

Checking Unit Properties
To see the low-level properties of a unit, you can use the show command. This will display a list of properties that are set for the specified unit using a key=value format:

  • systemctl show sshd.service
Output
Id=sshd.service
Names=sshd.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=syslog.target network.target auditd.service systemd-journald.socket basic.target system.slice
Description=OpenSSH server daemon
. . .

If you want to display a single property, you can pass the -p flag with the property name. For instance, to see the conflicts that the sshd.service unit has, you can type:

  • systemctl show sshd.service -p Conflicts
Output
Conflicts=shutdown.target

Masking and Unmasking Units
We saw in the service management section how to stop or disable a service, but systemd also has the ability to mark a unit as completely unstartable, automatically or manually, by linking it to /dev/null. This is called masking the unit, and is possible with the mask command:

  • sudo systemctl mask nginx.service
This will prevent the Nginx service from being started, automatically or manually, for as long as it is masked.

If you check the list-unit-files, you will see the service is now listed as masked:

  • systemctl list-unit-files
Output
. . .
kmod-static-nodes.service static
ldconfig.service static
mandb.service static
messagebus.service static
nginx.service masked
quotaon.service static
rc-local.service static
rdisc.service disabled
rescue.service static
. . .

If you attempt to start the service, you will see a message like this:

  • sudo systemctl start nginx.service
Output
Failed to start nginx.service: Unit nginx.service is masked.

To unmask a unit, making it available for use again, use the unmask command:

  • sudo systemctl unmask nginx.service
This will return the unit to its previous state, allowing it to be started or enabled.

Editing Unit Files
While the specific format for unit files is outside of the scope of this tutorial, systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments. This functionality was added in systemd version 218.

The edit command, by default, will open a unit file snippet for the unit in question:

  • sudo systemctl edit nginx.service
This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the nginx.service, a directory called nginx.service.d will be created.

Within this directory, a snippet will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override snippet with the full unit file. The snippet’s directives will take precedence over those found in the original unit file.
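For example, a drop-in that only changes the restart behavior of the service might look like this (the directives are standard systemd options; the values are illustrative):

```ini
# /etc/systemd/system/nginx.service.d/override.conf
[Service]
# These two lines are merged over the original unit file in memory;
# everything else in nginx.service is left untouched.
Restart=on-failure
RestartSec=5
```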

If you wish to edit the full unit file instead of creating a snippet, you can pass the --full flag:

  • sudo systemctl edit --full nginx.service
This will load the current unit file into the editor, where it can be modified. When the editor exits, the changed file will be written to /etc/systemd/system, which will take precedence over the system’s unit definition (usually found somewhere in /lib/systemd/system).

To remove any additions you have made, either delete the unit’s .d configuration directory or the modified service file from /etc/systemd/system. For instance, to remove a snippet, we could type:

  • sudo rm -r /etc/systemd/system/nginx.service.d
To remove a full modified unit file, we would type:

  • sudo rm /etc/systemd/system/nginx.service
After deleting the file or directory, you should reload the systemd process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:

  • sudo systemctl daemon-reload
Adjusting the System State (Runlevel) with Targets
Targets are special unit files that describe a system state or synchronization point. Like other units, the files that define targets can be identified by their suffix, which in this case is .target. Targets do not do much themselves, but are instead used to group other units together.

This can be used in order to bring the system to certain states, much like other init systems use runlevels. They are used as a reference for when certain functions are available, allowing you to specify the desired state instead of the individual units needed to produce that state.

For instance, there is a swap.target that is used to indicate that swap is ready for use. Units that are part of this process can sync with this target by indicating in their configuration that they are WantedBy= or RequiredBy= the swap.target. Units that require swap to be available can specify this condition using the Wants=, Requires=, and After= specifications to indicate the nature of their relationship.
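As a sketch of the consuming side of that relationship, a hypothetical unit that must not start until swap is available could declare:

```ini
# /etc/systemd/system/bigcache.service (hypothetical)
[Unit]
Description=Cache service that needs swap
Requires=swap.target   # fail if swap.target cannot be activated
After=swap.target      # ordering: start only once swap.target is up

[Install]
WantedBy=multi-user.target
```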

Getting and Setting the Default Target
The systemd process has a default target that it uses when booting the system. Satisfying the cascade of dependencies from that single target will bring the system into the desired state. To find the default target for your system, type:

  • systemctl get-default
Output
multi-user.target

If you wish to set a different default target, you can use the set-default command. For instance, if you have a graphical desktop installed and you wish for the system to boot into that by default, you can change your default target accordingly:

  • sudo systemctl set-default graphical.target
Listing Available Targets
You can get a list of the available targets on your system by typing:

  • systemctl list-unit-files --type=target
Unlike runlevels, multiple targets can be active at one time. An active target indicates that systemd has attempted to start all of the units tied to the target and has not tried to tear them down again. To see all of the active targets, type:

  • systemctl list-units --type=target
Isolating Targets
It is possible to start all of the units associated with a target and stop all units that are not part of the dependency tree. The command that we need to do this is called, appropriately, isolate. This is similar to changing the runlevel in other init systems.

For instance, if you are operating in a graphical environment with graphical.target active, you can shut down the graphical system and put the system into a multi-user command line state by isolating the multi-user.target. Since graphical.target depends on multi-user.target but not the other way around, all of the graphical units will be stopped.

You may wish to take a look at the dependencies of the target you are isolating before performing this procedure to ensure that you are not stopping vital services:

  • systemctl list-dependencies multi-user.target
When you are satisfied with the units that will be kept alive, you can isolate the target by typing:

  • sudo systemctl isolate multi-user.target
Using Shortcuts for Important Events
There are targets defined for important events like powering off or rebooting. However, systemctl also has some shortcuts that add a bit of additional functionality.

For instance, to put the system into rescue (single-user) mode, you can just use the rescue command instead of isolate rescue.target:

  • sudo systemctl rescue
This will provide the additional functionality of alerting all logged in users about the event.

To halt the system, you can use the halt command:

  • sudo systemctl halt
To initiate a full shutdown, you can use the poweroff command:

  • sudo systemctl poweroff
A restart can be started with the reboot command:

  • sudo systemctl reboot
These all alert logged-in users that the event is occurring, something that simply running or isolating the target will not do. Note that most machines will link the shorter, more conventional commands for these operations so that they work properly with systemd.

For example, to reboot the system, you can usually type:

  • sudo reboot
Conclusion
By now, you should be familiar with some of the basic capabilities of the systemctl command that allow you to interact with and control your systemd instance. The systemctl utility will be your main point of interaction for service and system state management.

While systemctl operates mainly with the core systemd process, there are other components to the systemd ecosystem that are controlled by other utilities. Other capabilities, like log management and user sessions are handled by separate daemons and management utilities (journald/journalctl and logind/loginctl respectively). Taking time to become familiar with these other tools and daemons will make management an easier task.
 