Subsections of Pentest, IR and Forensics

Penetration Testing

What is Penetration Testing?

“Penetration testing is security testing in which assessors mimic real-world attacks to identify methods for circumventing the security features of an application, system, or network. It often involves launching real attacks on real systems and data that use tools and techniques commonly used by attackers.”

Operating Systems

  • Desktop: Windows, Unix, Linux, macOS, ChromeOS, Ubuntu
  • Mobile: iOS, Android, Blackberry OS, Windows Mobile, WebOS, Symbian OS

Approaches

  1. Internal vs. external
  2. Web and mobile application assessments
  3. Social Engineering
  4. Wireless Network, Embedded Device & IoT
  5. ICS (Industrial Control Systems) penetration

General Methodology

  • Planning
  • Discovery
  • Attack
  • Report

Penetration Testing Phases

Penetration Testing – Planning

  • Setting Objectives
  • Establishing Boundaries
  • Informing Need-to-know employees

Penetration Testing – Discovery

Vulnerability analysis

Vulnerability scanning can help identify outdated software versions, missing patches, and misconfigurations, and validate compliance with or deviations from an organization’s security policy. This is done by identifying the OSes and major software applications running on the hosts and matching them with information on known vulnerabilities stored in the scanners’ vulnerability databases.

Dorks

A Google Dork query, sometimes just referred to as a dork, is a search string that uses advanced search operators to find information that is not readily available on a website.
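
For illustration, a few example dork queries built from common operators (example.com and the search terms are placeholders, not real targets):

```
site:example.com filetype:pdf "confidential"     # documents hosted on a single domain
intitle:"index of" "backup"                      # exposed directory listings
inurl:admin intitle:"login"                      # admin login pages
filetype:log "error" site:example.com            # leftover log files
```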

What Data Can We Find Using Google Dorks?

  • Admin login pages
  • Username and passwords
  • Vulnerable entities
  • Sensitive documents
  • Govt/military data
  • Email lists
  • Bank Account details and lots more…

Passive vs. Active Reconnaissance

  • Passive: Monitoring employees, Listening to network traffic
  • Active: Network Mapping, Port Scanning, Password cracking

Social Engineering

“Social Engineering is an attempt to trick someone into revealing information (e.g., a password) that can be used to attack systems or networks. It is used to test the human element and user awareness of security, and can reveal weaknesses in user behavior.”

Scanning Tools

  • Network Mapper → NMAP
  • Network Analyzer and Profiler → WIRESHARK
  • Password Crackers → JOHNTHERIPPER
  • Hacking Tools → METASPLOIT
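
To make the idea concrete, here is a minimal Python sketch of the TCP connect scan that a tool like NMAP automates far more capably; the target hostname and port list are placeholders, and scanning should only be done against systems you are authorized to test.

```python
# Minimal TCP connect-scan sketch (illustrative only; not a replacement for NMAP).
import socket

TARGET = "scanme.example.org"      # placeholder target
PORTS = [21, 22, 80, 443, 3389]    # a few commonly probed ports

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)                      # avoid hanging on filtered ports
        result = sock.connect_ex((TARGET, port))  # returns 0 when the connection succeeds
        state = "open" if result == 0 else "closed/filtered"
        print(f"{TARGET}:{port} {state}")
```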

Passive Online

  • Wire sniffing
  • Man in the Middle
  • Replay Attack

Active Online

  • Password Guessing
  • Trojan/spyware/keyloggers
  • Hash injection
  • Phishing

Offline Attacks

  • Pre-computed Hashes
    • The attacker hashes a large set of candidate passwords ahead of time, then compares captured password hashes against that table instead of hashing each guess at crack time (see the sketch after this list).
  • Distributed Network Attack (DNA)
    • DNA is a password cracking system sold by AccessData.
    • DNA can perform brute-force cracking of 40-bit RC2/RC4 keys. For longer keys, DNA can attempt password cracking. (It’s computationally infeasible to attempt a brute-force attack on a 128-bit key.)
    • DNA can mine suspect’s hard drive for potential passwords.
  • Rainbow Tables
    • A rainbow table is a pre-computed table for reversing cryptographic hash functions, usually for cracking password hashes.
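
A minimal Python sketch of the pre-computed hash idea (the wordlist and "captured" hash are made-up examples; real rainbow tables additionally use hash chains and reduction functions to save space):

```python
# Pre-compute hashes once, then crack by lookup instead of hashing every guess.
import hashlib

wordlist = ["password", "letmein", "summer2024", "admin123"]   # toy wordlist
precomputed = {hashlib.md5(word.encode()).hexdigest(): word for word in wordlist}

captured_hash = hashlib.md5(b"letmein").hexdigest()    # stand-in for a stolen hash
print(precomputed.get(captured_hash, "not in table"))  # -> letmein
```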

Tech-less Discovery

  • Social Engineering
  • Shoulder surfing
  • Dumpster Diving

Penetration Testing – Attack

“While vulnerability scanners check only for the possible existence of a vulnerability, the attack phase of a penetration test exploits the vulnerability to confirm its existence.”


Types of Attack Scenarios

  1. White Box Testing: In this type of testing, the penetration tester has full access to the target system and all relevant information, including source code, network diagrams, and system configurations. This type of testing is also known as “full disclosure” testing and is typically performed during the planning phase of penetration testing.
  2. Grey Box Testing: In this type of testing, the penetration tester has partial access to the target system and some knowledge of its internal workings, but not full access or complete knowledge. This type of testing is typically performed during the Discovery phase of penetration testing.
  3. Black Box Testing: In this type of testing, the penetration tester has no prior knowledge or access to the target system and must rely solely on external observations and testing to gather information and identify vulnerabilities. This type of testing is also known as “blind” testing and is typically performed during the Attack phase of penetration testing.

Exploited Vulnerabilities


Penetration Testing – Reporting

Executive Summary

“This section will communicate to the reader the specific goals of the Penetration Test and the high level findings of the testing exercise.”

  • Background
  • Overall Posture
  • Risk Ranking
  • General Findings
  • Recommendations
  • Roadmap

Technical Review

Introduction

  • Personnel involved

  • Contact information

  • Assets involved in testing

  • Objectives of Test

  • Scope of test

  • Strength of test

  • Approach

  • Threat/Grading Structure

    Scope

  • Information gathering

  • Passive intelligence

  • Active intelligence

  • Corporate intelligence

  • Personnel intelligence

    Vulnerability Assessment: In this section, a definition of the methods used to identify the vulnerability, as well as the evidence/classification of the vulnerability, should be present.

    Vulnerability Confirmation: This section should review, in detail, all the steps taken to confirm the defined vulnerability, as well as the following:

  • Exploitation Timeline

  • Targets selected for Exploitation

  • Exploitation Activities

    Post Exploitation

  • Escalation path

  • Acquisition of Critical Information

  • Value of information

  • Access to core business systems

  • Access to compliance protected data sets

  • Additional information/systems accessed

  • Ability to maintain persistence

  • Ability to exfiltrate data

  • Countermeasure effectiveness

    Risk/Exposure: This section will cover the business risk in the following subsections:

  • Evaluate incident frequency

  • Estimate loss magnitude per incident

  • Derive Risk

Penetration Testing Tools

  • Kali Linux
  • NMAP (Network Scanner)
  • JohnTheRipper (Password cracking tool)
  • MetaSploit
  • Wireshark (Packet Analyzer)
  • HackTheBox (Testing playground)
  • LameWalkThrough (Testing playground)

Incident Response

What is Incident Response?

“Preventive activities based on the results of risk assessments can lower the number of incidents, but not all incidents can be prevented. An incident response is therefore necessary for rapidly detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring IT services.”

Events

“An event can be something as benign and unremarkable as typing on a keyboard or receiving an email.”

In some cases, if there is an Intrusion Detection System (IDS), an alert can be considered an event until it is validated as a threat.

Incident

“An incident is an event that negatively affects IT systems and impacts on the business. It’s an unplanned interruption or reduction in quality of an IT service.”

An event can lead to an incident, but not the other way around.

Why Incident Response is Important

One of the benefits of having an incident response capability is that it supports responding to incidents systematically so that the appropriate actions are taken. It helps personnel to minimize loss or theft of information and disruption of services caused by incidents, and to use information gained during incident handling to better prepare for handling future incidents.

IR Team Models

  • Central teams
  • Distributed teams
  • Coordinating teams

Coordinating Teams

Incidents don't occur in a vacuum and can have an impact on multiple parts of a business, so establish relationships with the other teams involved.

Common Attack Vectors

Organizations should be generally prepared to handle any incident, but should focus on being prepared to handle incidents that use common attack vectors:

  1. External/Removable Media
  2. Attrition
  3. Web
  4. Email
  5. Impersonation
  6. Loss or theft of equipment

Baseline Questions

Knowing the answers to these will help your coordination with other teams and the media.

  • Who attacked you? Why?
  • When did it happen? How did it happen?
  • Did this happen because you have poor security processes?
  • How widespread is the incident?
  • What steps are you taking to determine what happened and to prevent future occurrences?
  • What is the impact of the incident?
  • Was any PII exposed?
  • What is the estimated cost of this incident?

Incident Response Phases


Incident Response Process

Incident Response Preparation

Incident Response Policy

IR Policy needs to cover the following:

  • IR Team: the composition of the incident response team within the organization.
  • Roles: the role of each of the team members.
  • Means, Tools, Resources: the technological means, tools, and resources that will be used to identify and recover compromised data.
  • Policy Testing: the persons responsible for testing the policy.
  • Action Plan: how to put the policy into action.

Resources

Incident Handler Communications and Facilities:

  • Contact information

  • On-call information

  • Incident reporting mechanisms

  • Issue tracking system

  • Smartphones

  • Encryption software

  • War room

  • Secure storage facility

    Incident Analysis Hardware and Software:

  • Digital forensic workstations and/or backup devices

  • Laptops

  • Spare workstations, servers, and networking equipment

  • Blank removable media

  • Portable printer

  • Packet sniffers and protocol analyzers

  • Digital forensic software

  • Removable media

  • Evidence gathering accessories

    Incident Analysis Resources:

  • Port lists

  • Documentation

  • Network diagrams and lists of critical assets

  • Current baselines

  • Cryptographic hashes

The Best Defense

“Keeping the number of incidents reasonably low is very important to protect the business processes of the organization. If security controls are insufficient, higher volumes of incidents may occur, overwhelming the incident response team.”

So the best defense is:

  • Periodic Risk Assessment
  • Hardened Host Security
  • Whitelist based Network Security
  • Malware prevention systems
  • User awareness and training programs

Checklist

  • Are all members aware of the security policies of the organization?
  • Do all members of the Computer Incident Response Team know whom to contact?
  • Do all incident responders have access to journals and access to incident response toolkits to perform the actual incident response process?
  • Have all members participated in incident response drills to practice the incident response process and to improve overall proficiency on a regularly established basis?

Incident Response Detection and Analysis

Precursors and Indicators

Precursors

  • A precursor is a sign that an incident may occur in the future.
    • Web server log entries that show the usage of a vulnerability scanner.
    • An announcement of a new exploit that targets a vulnerability of the organization’s mail server.
    • A threat from a group stating that the group will attack the organization.

Indicators

  • An indicator is a sign that an incident may have occurred or may be occurring now (see the log-scanning sketch after this list).
    • Antivirus software alerts when it detects that a host is infected with malware.
    • A system admin sees a filename with unusual characters.
    • A host records an auditing configuration change in its log.
    • An application logs multiple failed login attempts from an unfamiliar remote system.
    • An email admin sees many bounced emails with suspicious content.
    • A network admin notices an unusual deviation from typical network traffic flows.
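
As a simple example of turning one of these indicators into an automated check, the sketch below counts failed logins per source IP; the log path and sshd-style line format are assumptions.

```python
# Flag source IPs with many failed logins (a common indicator of brute forcing).
import re
from collections import Counter

THRESHOLD = 10                      # arbitrary alert threshold
failures = Counter()

with open("/var/log/auth.log") as log:   # placeholder path and format
    for line in log:
        match = re.search(r"Failed password .* from (\d{1,3}(?:\.\d{1,3}){3})", line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible brute force from {ip}: {count} failed logins")
```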

Monitoring Systems

  • Monitoring systems are crucial for early detection of threats.

  • These systems are not mutually exclusive and still require an IR team to document and analyze the data.

    IDS vs. IPS: Both are parts of the network infrastructure. The main difference between them is that IDS is a monitoring system, while IPS is a control system.

    DLP: Data Loss Prevention (DLP) is a set of tools and processes used to ensure that sensitive data is not lost, misused, or accessed by unauthorized users.

    SIEM: Security Information and Event Management solutions combine Security Event Management (SEM), which carries out analysis of event and log data in real time, with Security Information Management (SIM).

Documentation

Regardless of the monitoring system, highly detailed, thorough documentation is needed for the current and future incidents.

  • The current status of the incident
  • A summary of the incident
  • Indicators related to the incident
  • Other incidents related to this incident
  • Actions taken by all incident handlers on this incident.
  • Chain of custody, if applicable
  • Impact assessments related to the incident
  • Contact information for other involved parties
  • A list of evidence gathered during the incident investigation
  • Comments from incident handlers
  • Next steps to be taken (e.g., rebuild the host, upgrade an application)

Functional Impact Categories


Information Impact Categories


Recoverability Effort Categories


Notifications

  • CIO
  • Local and Head of information security
  • Other incident response teams within the organization
  • External incident response teams (if appropriate)
  • System owner
  • Human resources
  • Public affairs
  • Legal department
  • Law enforcement (if appropriate)

Containment, Eradication & Recovery

Containment

“Containment is important before an incident overwhelms resources or increases damage. Containment strategies vary based on the type of incident. For example, the strategy for containing an email-borne malware infection is quite different from that of a network-based DDoS attack.”

An essential part of containment is decision-making. Such decisions are much easier to make if there are predetermined strategies and procedures for containing the incident. Criteria for determining the appropriate strategy include:

  1. Potential damage to and theft of resources
  2. Need for evidence preservation
  3. Service availability
  4. Time and resources needed to implement the strategy
  5. Effectiveness of the strategy
  6. Duration of the solution

Forensics in IR

“Evidence should be collected according to procedures that meet all applicable laws and regulations that have been developed from previous discussions with legal staff and appropriate law enforcement agencies so that any evidence can be admissible in court.” — NIST 800-61

  1. Capture a backup image of the system as-is
  2. Gather evidence
  3. Follow the Chain of custody protocols

Eradication and Recovery

  1. After an incident has been contained, eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, as well as identifying and mitigating all vulnerabilities that were exploited.
  2. Recovery may involve such actions as restoring systems from clean backups, rebuilding systems from scratch, replacing compromised files with clean versions, installing patches, changing passwords, and tightening network perimeter security.
  3. A high level of testing and monitoring are often deployed to ensure restored systems are no longer impacted by the incident. This could take weeks or months, depending on how long it takes to bring back compromised systems into production.

Checklist

  • Can the problem be isolated? Are all affected systems isolated from non-affected systems? Have forensic copies of affected systems been created for further analysis?
  • If possible, can the system be reimaged and then hardened with patches and/or other countermeasures to prevent or reduce the risk of attacks? Have all malware and other artifacts left behind by the attackers been removed, and the affected systems hardened against further attacks?
  • What tools are you going to use to test, monitor, and verify that the systems being restored to production are not compromised by the same methods that caused the original incident?

Post Incident Activities

Holding a “lessons learned” meeting with all involved parties after a major incident, and optionally periodically after lesser incidents as resources permit, can be extremely helpful in improving security measures and the incident handling process itself.

Lessons Learned

  • Exactly what happened, and at what times?
  • How well did staff and management perform in dealing with the incident? Were the documented procedures followed? Were they adequate?
  • What information was needed sooner?
  • Were any steps or actions taken that might have inhibited the recovery?
  • What would the staff and management do differently the next time a similar incident occurs?
  • How could information sharing with other organizations have been improved?
  • What corrective actions can prevent similar incidents in the future?
  • What precursors or indicators should be watched in the future to detect similar incidents?

Other Activities

  • Utilizing data collected
  • Evidence Retention
  • Documentation

Digital Forensics

Forensics Overview

What are Forensics?

“Digital forensics, also known as computer and network forensics, has many definitions. Generally, it is considered the application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a strict chain of custody for the data.”

Types of Data

The first step in the forensic process is to identify potential sources of data and acquire data from them. The most obvious and common sources of data are desktop computers, servers, network storage devices, and laptops.

  • CDs/DVDs
  • Internal & External Drives
  • Volatile data
  • Network Activity
  • Application Usage
  • Portable Digital Devices
  • Externally Owned Property
  • Computer at Home Office
  • Alternate Sources of Data
  • Logs
  • Keystroke Monitoring

The Need for Forensics

  • Criminal Investigation
  • Incident Handling
  • Operational Troubleshooting
  • Log Monitoring
  • Data Recovery
  • Data Acquisition
  • Due Diligence/Regulatory Compliance

Objectives of Digital Forensics

  • It helps to recover, analyze, and preserve computer and related materials in such a manner that it helps the investigation agency to present them as evidence in a court of law. It helps to postulate the motive behind the crime and identity of the main culprit.
  • Designing procedures at a suspected crime scene, which helps you to ensure that the digital evidence obtained is not corrupted.
  • Data acquisition and duplication: Recovering deleted files and deleted partitions from digital media to extract the evidence and validate them.
  • Help you to identify the evidence quickly, and also allows you to estimate the potential impact of the malicious activity on the victim.
  • Producing a computer forensic report, which offers a complete report on the investigation process.
  • Preserving the evidence by following the chain of custody.

Forensic Process – NIST

Collection: Identify, label, record, and acquire data from the possible sources, while preserving the integrity of the data.

Examination: Process large amounts of collected data to assess and extract data of particular interest.

Analysis: Analyze the results of the examination, using legally justifiable methods and techniques.

Reporting: Report the results of the analysis.

The Forensic Process

Data Collection and Examination


Steps to Collect Data

Develop a plan to acquire the data: Create a plan that prioritizes the sources, establishing the order in which the data should be acquired.

Acquire the data: Use forensic tools to collect the volatile data, duplicate non-volatile data sources, and secure the original data sources.

Verify the integrity of the data: Forensic tools can create hash values for the original source, so the duplicate can be verified as being complete and untampered with.
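
A minimal sketch of that verification step in Python; the image paths are placeholders, and in practice the imaging tool records these hashes for you.

```python
# Verify a working copy against the original evidence by comparing SHA-256 hashes.
import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("evidence/original.img")       # placeholder paths
duplicate = sha256_of("evidence/working_copy.img")
print("verified" if original == duplicate else "MISMATCH: copy is not identical")
```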

Overview of Chain of Custody

A clearly defined chain of custody should be followed to avoid allegations of mishandling or tampering of evidence. This involves keeping a log of every person who had physical custody of the evidence, documenting the actions that they performed on the evidence and at what time, storing the evidence in a secure location when it is not being used, making a copy of the evidence and performing examination and analysis using only the copied evidence, and verifying the integrity of the original and copied evidence.

Examination

Bypassing Controls: OSs and applications may have data compression, encryption, or ACLs.

A Sea of Data: Hard drives may have hundreds of thousands of files, not all of which are relevant.

Tools: Various tools and techniques exist to help filter and exclude data from searches to expedite the process.

Analysis & Reporting

Analysis

“The analysis should include identifying people, places, items, and events, and determining how these elements are related so that a conclusion can be reached.”

Putting the pieces together

Coordination between multiple sources of data is crucial to building a complete picture of what happened in the incident. NIST provides the example of an IDS log linking an event to a host, the host's audit logs linking the event to a specific user account, and the host IDS log indicating what actions that user performed.

Writing your forensic report

A case summary is meant to form the basis of opinions. While there are a variety of laws that relate to expert reports, the general rules are:

  • If it is not in your report, you cannot testify about it.
  • Your report needs to detail the basis for your conclusions.
  • Detail every test conducted, the methods and tools used, and the results.

Report Composition

  1. Overview/Case Summary
  2. Forensic Acquisition & Examination Preparation
  3. Findings & Report (Analysis)
  4. Conclusion

SANS Institute Best Practices

  1. Take Screenshots
  2. Bookmark evidence via forensic application of choice
  3. Use built-in logging/reporting options within your forensic tool
  4. Highlight and export data items into .csv or .txt files
  5. Use a digital audio recorder vs. handwritten notes when necessary

Forensic Data

Data Files

What’s not there

Deleted files: When a file is deleted, it is typically not erased from the media; instead, the information in the directory’s data structure that points to the location of the file is marked as deleted.

Slack Space: If a file requires less space than the file allocation unit size, an entire file allocation unit is still reserved for the file.

Free Space: Free space is the area on media that is not allocated to any partition; it may still contain pieces of data.

MAC data

It’s important to know as much information about relevant files as possible. Recording the modification, access, and creation times of files allows analysts to help establish a timeline of the incident.

  1. Modification Time
  2. Access Time
  3. Creation Time
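
A small Python sketch of recording MAC times with the standard library; the file path is a placeholder. Note that st_ctime is the inode change time on Unix but the creation time on Windows.

```python
# Record modification/access/change(creation) times for a file of interest.
import os
from datetime import datetime, timezone

def mac_times(path):
    st = os.stat(path)
    as_utc = lambda ts: datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return {
        "path": path,
        "modified": as_utc(st.st_mtime),
        "accessed": as_utc(st.st_atime),
        "changed_or_created": as_utc(st.st_ctime),
    }

print(mac_times("C:/Users/Public/example.docx"))   # placeholder path
```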

Logical Backup vs. Imaging

Logical Backup:

  • Copies the directories and files of a logical volume. It does not capture other data that may be present on the media, such as deleted files or residual data stored in slack space.
  • Can be used on live systems if using standard backup software.
  • May be resource intensive.

Imaging:

  • Generates a bit-for-bit copy of the original media, including free space and slack space. Bit stream images require more storage space and take longer to perform than logical backups.
  • If evidence is needed for legal or HR reasons, a full bit stream image should be taken, and all analysis done on the duplicate.
  • Disk-to-disk vs. disk-to-file.
  • Should not be used on a live system since data is always changing.

Tools and Techniques

Many forensic products allow the analyst to perform a wide range of processes to analyze files and applications, as well as to collect files, read disk images, and extract data from files.

  • File Viewers
  • Uncompressing Files
  • GUI for Data Structure
  • Identifying Known Files
  • String Searches & Pattern Matches
  • Metadata

Operating System Data

“OS data exists in both non-volatile and volatile states. Non-volatile data refers to data that persists even after a computer is powered down, such as a filesystem stored on a hard drive. Volatile data refers to data on a live system that is lost after a computer is powered down, such as the current network connections to and from the system.”

  • Volatile: Slack space, Free space, Network configuration/connections, Running processes, Open files, Login sessions, Operating system time
  • Non-Volatile: Configuration files, Logs, Application files, Data files, Swap files, Dump files, Hibernation files, Temporary files

Collection & Prioritization of Volatile Data

  1. Network Connections
  2. Login Sessions
  3. Contents of Memory
  4. Running Processes
  5. Open Files
  6. Network Configuration
  7. Operating System Time
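
The sketch below shows how some of this volatile data could be snapshotted with the third-party psutil library (pip install psutil); it is an illustration only, some calls need elevated privileges, and a real response would rely on dedicated, trusted tooling.

```python
# Snapshot a few categories of volatile data with psutil (third-party dependency).
from datetime import datetime, timezone
import psutil

print("collected at:", datetime.now(timezone.utc).isoformat())
print("boot time:", datetime.fromtimestamp(psutil.boot_time()).isoformat())
print("logged-in users:", [user.name for user in psutil.users()])
print("connections:")
for conn in psutil.net_connections():            # may require admin/root privileges
    print("  ", conn.laddr, conn.raddr, conn.status)
print("processes:")
for proc in psutil.process_iter(["pid", "name"]):
    print("  ", proc.info["pid"], proc.info["name"])
```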

Collecting Non-Volatile Data

  1. Consider Power-Down Options
  2. File System Data Collected
  3. Users and Groups
  4. Passwords
  5. Network Shares
  6. Logs

Logs

Other logs can be collected depending on the incident under analysis:

  • In case of a network hack: Collect logs of all the network devices lying on the route to the hacked devices, as well as the perimeter router (ISP router). The firewall rule base may also be required in this case.
  • In case of unauthorized access: Save the web server logs, application server logs, application logs, router or switch logs, firewall logs, database logs, IDS logs, etc.
  • In case of a Trojan/Virus/Worm attack: Save the antivirus logs apart from the event logs (pertaining to the antivirus).

Windows

  • The file systems used by Windows include FAT, exFAT, NTFS, and ReFS.

    Investigators can search out evidence by analyzing the following important locations of a Windows system (a small registry-listing sketch follows this list):

  • Recycle Bin

  • Registry

  • Thumbs.db

  • Files

  • Browser History

  • Print Spooling
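
As one concrete example (Windows only), the standard-library winreg module can list the values under the current user's Run key, a common persistence location; this is a minimal sketch, not a full registry triage.

```python
# List programs registered to auto-run for the current user (Windows only).
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    _, value_count, _ = winreg.QueryInfoKey(key)   # (subkeys, values, last_modified)
    for index in range(value_count):
        name, data, _type = winreg.EnumValue(key, index)
        print(f"{name}: {data}")
```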

macOS

  • Mac OS X is a UNIX-based OS that contains a Mach 3 microkernel and a FreeBSD-based subsystem. Its user interface is Apple-like, whereas the underlying architecture is UNIX-like.
  • Mac OS X offers novel techniques to create a forensic duplicate. To do so, the perpetrator’s computer is placed into “Target Disk Mode”. Using this mode, the forensic examiner creates a forensic duplicate of the perpetrator’s hard disk over a FireWire cable connection between the two machines.

Linux

A Linux machine can provide empirical evidence if it is recovered from a crime scene. In this case, forensic investigators should analyze the following folders and directories (a small sketch of one such check follows the list):

  • /etc[%SystemRoot%/System32/config]
  • /var/log
  • /home/$USER
  • /etc/passwd
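
A minimal sketch of one such check: flagging /etc/passwd entries that are root-equivalent (UID 0) or that have an interactive shell.

```python
# Flag potentially interesting accounts in /etc/passwd.
with open("/etc/passwd") as passwd:
    for line in passwd:
        name, _pw, uid, _gid, _gecos, _home, shell = line.strip().split(":")
        if uid == "0" or shell.endswith("sh"):     # crude heuristic for interactive shells
            print(f"{name} uid={uid} shell={shell}")
```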

Application Data

OSs, files, and networks are all needed to support applications: OSs to run the applications, networks to send application data between systems, and files to store application data, configuration settings, and the logs. From a forensic perspective, applications bring together files, OSs, and networks. — NIST 800-86

Application Components

  • Config Settings
    • Configuration file
    • Runtime Options
    • Added to Source Code
  • Authentication
    • External Authentication
    • Proprietary Authentication
    • Pass-through authentication
    • Host/User Environment
  • Logs
    • Event
    • Audit
    • Error
    • Installation
    • Debugging
  • Data
    • Can live temporarily in memory and/or permanently in files
    • File format may be generic or proprietary
    • Data may be stored in databases
    • Some applications create temp files during a session or after an improper shutdown
  • Supporting Files
    • Documentation
    • Links
    • Graphics
  • App Architecture
    • Local
    • Client/Server
    • Peer-to-Peer

Types of Applications

Certain types of applications are more likely to be the focus of forensic analysis, including email, Web usage, interactive messaging, file-sharing, document usage, security applications, and data concealment tools.


Email

“From end to end, information regarding a single email message may be recorded in several places – the sender’s system, each email server that handles the message, and the recipient’s system, as well as the antivirus, spam, and content filtering server.” — NIST 800-45

Web Usage

Web Data from Host:

  • Typically, the richest sources of information regarding web usage are the hosts running the web browsers.
  • Favorite websites
  • History with timestamps of websites visited
  • Cached web data files
  • Cookies

Web Data from Server:

  • Another good source of web usage information is web servers, which typically keep logs of the requests they receive.
  • Timestamps
  • IP addresses
  • Web browser version
  • Type of request
  • Resource requested

Collecting the Application Data

Overview


Network Data

“Analysts can use data from network traffic to reconstruct and analyze network-based attacks and inappropriate network usage, as well as to troubleshoot various types of operational problems. The term network traffic refers to computer network communications that are carried over wired or wireless networks between hosts.” — NIST 800-86

TCP/IP


Sources of Network Data

These sources collectively capture important data from all four TCP/IP layers.


Data Value

  • IDS Software
  • SEM Software
  • NFAT Software (Network Forensic Analysis Tool)
  • Firewall, Routers, Proxy Servers, & RAS
  • DHCP Server
  • Packet Sniffers
  • Network Monitoring
  • ISP Records

Attacker Identification

“When analyzing most attacks, identifying the attacker is not an immediate, primary concern: ensuring that the attack is stopped and recovering systems and data are the main interests.” — NIST 800-86

  1. Contact IP Address Owner: Can help identify who is responsible for an IP address; usually an escalation.
  2. Send Network Traffic: Not recommended for organizations
  3. Application Content: Data packets could contain information about the attacker’s identity.
  4. Seek ISP Assistance: Requires court order and is only done to assist in the most serious of attacks.
  5. History of IP address: Can look for trends of suspicious activity.

Introduction to Scripting

Scripting Overview

History of Scripting

  • IBM’s Job Control Language (JCL) was the first scripting language.
  • Many batch jobs require setup, with specific requirements for main storage, and dedicated devices such as magnetic tapes, private disk volumes, and printers set up with special forms.
  • JCL was developed as a means of ensuring that all required resources are available before a job is scheduled to run.
  • The first interactive shell was developed in the 1960s.
  • Calvin Mooers, in his TRAC language, is generally credited with inventing command substitution: the ability to embed commands in scripts that, when interpreted, insert a character string into the script.
  • One innovation in the UNIX shells was the ability to send the output of one program into the input of another, making it possible to do complex tasks in one line of shell code.

Script Usage

  • Scripts have multiple uses, but automation is the name of the game.
  • Image rollovers
  • Validation
  • Backup
  • Testing

Scripting Concepts

  • Scripts
    • Small interpreted programs
    • Script can use functions, procedures, external calls, variables, etc.
  • Variables
  • Arguments/Parameters
    • Parameters are pre-established variables that a function uses to perform its related process (a short example follows this list).
  • If Statement
  • Loops
    • For Loop
    • While Loop
    • Until Loop
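
A tiny Python example tying these concepts together; the function, variable names, and values are arbitrary.

```python
# Demonstrates parameters, a variable, an if statement, and for/while loops.
def count_failures(events, threshold=3):      # 'events' and 'threshold' are parameters
    failures = 0                               # variable
    for event in events:                       # for loop
        if event == "FAIL":                    # if statement
            failures += 1
    while failures > threshold:                # while loop (runs until the condition is false)
        print("too many failures, investigate")
        break
    return failures

print(count_failures(["OK", "FAIL", "FAIL", "OK", "FAIL", "FAIL"]))   # -> 4
```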

Scripting Languages

JavaScript

  • Object-oriented, developed in 1995 by Netscape Communications.
  • Server or client side use, most popular use is client side.
  • Supports event-driven, functional, and imperative programming styles. It has APIs for working with text, arrays, dates, regular expressions, and the DOM, but the language itself doesn’t include any I/O, such as networking, storage, or graphics facilities. It relies upon the host environment in which it is embedded to provide these features.

Bash

  • UNIX shell and command language, written by Brian Fox for the GNU project as a free software replacement for the Bourne shell.
  • Released in 1989.
  • Default login shell for most Linux distros.
  • A command processor typically runs in a text window, but can also read and execute commands from a file.
  • POSIX compliant

Perl

  • Larry Wall began work on Perl in 1987.
  • Version 1.0 released on Dec 18, 1987.
  • Perl2 – 1988
  • Perl3 – 1989
  • Originally, the only documentation for Perl was a single lengthy man page.
  • Perl4 – 1991

PowerShell

  • Task automation and configuration management framework
  • Open-sourced and made cross-platform on 18 August 2016 with the introduction of PowerShell Core. Windows PowerShell is built on the .NET Framework, while PowerShell Core is built on .NET Core.

Binary

Binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two symbols used are typically “0” and “1” from the binary number system.

Adding a binary payload to a shell script could, for instance, be used to create a single file shell script that installs your entire software package, which could be composed of hundreds of files.

Hex

Advanced hex editors have scripting systems that let the user create macro-like functionality as a sequence of user interface commands for automating common tasks. This can be used to provide scripts that automatically patch files (e.g., game cheating, modding, or product fixes provided by the community) or to write more complex/intelligent templates.
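
A small Python sketch of the kind of byte-level patch such a script might automate; the filenames, offset, and replacement bytes are all hypothetical.

```python
# Patch two bytes at a fixed offset in a copy of a binary file (values are placeholders).
from pathlib import Path

data = bytearray(Path("app_original.bin").read_bytes())
OFFSET = 0x1A4                       # hypothetical patch location
NEW_BYTES = bytes([0x90, 0x90])      # hypothetical replacement bytes

print("before:", data[OFFSET:OFFSET + len(NEW_BYTES)].hex())
data[OFFSET:OFFSET + len(NEW_BYTES)] = NEW_BYTES
Path("app_patched.bin").write_bytes(data)   # keep the original intact; write a patched copy
print("after: ", NEW_BYTES.hex())
```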

Python Scripting

Benefits of Using Python

  • Open Source
  • Easy to learn and implement
  • Portable
  • High level
  • Can be used for almost anything in cybersecurity
  • Extensive libraries

Python Libraries

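
As a small illustration (not an exhaustive list), a few standard-library modules that come up constantly in security scripting:

```python
# A quick tour of standard-library modules commonly used in security scripts.
import hashlib   # hashing files and evidence
import json      # structured report output
import re        # pattern matching in logs
import socket    # basic network lookups

report = {
    "sha256": hashlib.sha256(b"sample evidence").hexdigest(),
    "ips_in_log": re.findall(r"\d+\.\d+\.\d+\.\d+", "failed login from 203.0.113.7"),
    "hostname": socket.gethostname(),
}
print(json.dumps(report, indent=2))
```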