An app's success depends heavily on its security. Users want a safe environment where they can interact with each other, so developers need to deliver digital solutions with app security in mind.
Hypertext Transfer Protocol Secure (HTTPS) is designed for secure communication over computer networks, including the internet. It is protected by encryption using Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL).
Attackers come in different forms. Hackers leverage technical expertise to infiltrate protected systems. Social engineers, by contrast, exploit weaknesses in human psychology to trick people into giving them access to personal information.
Phishing is a form of social engineering in which an attacker obtains a user's sensitive information, such as login credentials. In a phishing attack, the attacker impersonates a reputable entity over email or another communication channel and tricks the victim into installing malware through a malicious link or attachment.
Another type of threat is a man-in-the-middle (MITM) attack. In an MITM attack, the attacker intercepts communications between two parties, such as a mobile app and its backend database. The attacker can then eavesdrop on or manipulate those communications to cause harm or bypass security measures on either side of the connection.
App owners should always protect their apps with HTTPS, even if they don't handle sensitive communications. Many new browser features require HTTPS, and unprotected HTTP requests can reveal information about users' behaviors and identities.
Code obfuscation creates source or machine code that's difficult for potential hackers to understand. Developers use it to conceal a program's purpose, logic, or the implicit values embedded in it. Code obfuscation may include:
- encrypting some or all of the code;
- stripping out potentially revealing metadata;
- renaming useful class and variable names to meaningless labels;
- adding unused or meaningless code to an application’s binary.
Code is often obfuscated to protect intellectual property and prevent an attacker from reverse engineering a software program. On iOS, code obfuscation is less widespread because libraries are typically closed source rather than public, as they often are on Android, so an attacker can rarely obtain source code from iOS libraries. When a library's source code is public, code obfuscation becomes useful.
By making an application much more difficult to reverse engineer, a developer can protect it against:
- theft of trade secrets (intellectual property);
- unauthorized access;
- bypassing licensing or other controls;
- discovery of vulnerabilities.
Writers of malicious code also use obfuscation to disguise their code's true purpose and prevent their malware from being detected by signature-based antimalware tools. Deobfuscation techniques, such as program slicing, can sometimes be used to reverse engineer obfuscated code.
How code obfuscation works
Code obfuscation comprises different techniques, which can be combined to create a more complex and comprehensive defense against attackers. Some examples of obfuscation techniques are:
Renaming. Renaming alters the names of methods and variables. It makes the decompiled source harder for a human to understand but doesn't alter program execution. The new names can follow different schemes: letters (A, B, C), numbers, unprintable characters, or even invisible characters. Name obfuscation is a basic transformation used by most .NET, iOS, Java, and Android obfuscators. A renaming scheme can even reuse the same short labels across many scopes, so that several unrelated variables all end up named A, B, or C. This complicates the logic, making it much harder for hackers to decipher the source code.
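As a rough sketch (in Python, with hypothetical function names), renaming leaves behavior untouched while stripping all meaning from the identifiers:

```python
# Illustrative sketch of name obfuscation: the same function before and
# after renaming. Only the identifiers change; the logic is identical.

def calculate_discounted_price(base_price, discount_rate):
    # Readable original: the intent is obvious from the names.
    return base_price * (1 - discount_rate)

def a(b, c):
    # What an obfuscator might emit: same logic, but the purpose is no
    # longer evident from the identifiers.
    return b * (1 - c)
```

Both functions return the same results for the same inputs; a decompiler simply has far less to work with in the second form.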
Control flow. Control flow obfuscation synthesizes conditional, branching, and iterative constructs that produce valid executable logic but decompile into confusing, spaghetti-like code that is very difficult for a hacker to comprehend. However, this technique slows down runtime performance.
Instruction pattern transformation. This technique converts common instructions created by the compiler to other, less obvious constructs. These are perfectly legal machine language instructions that may not map cleanly to high-level languages such as Java or C#. An example is transient variable caching, which leverages the stack-based nature of the Java and .NET runtimes.
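Real instruction pattern transformation happens at the bytecode or IL level, but the idea can be sketched at source level: replace an obvious operation with a mathematically equivalent but less recognizable one. A hedged Python analogy:

```python
# Source-level analogy for instruction pattern transformation: each
# "obscured" form computes the same value through a less obvious identity.

def increment_plain(x):
    return x + 1

def increment_obscured(x):
    # For integers, ~x == -x - 1, so -(~x) == x + 1.
    return -(~x)

def double_plain(x):
    return x * 2

def double_obscured(x):
    # A left shift by one doubles an integer.
    return x << 1
```

An analyst reading many such substitutions must mentally undo each identity before the intent of the code becomes clear.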
Dummy code insertion. Code can be inserted into the executable that doesn’t affect the logic of the program but breaks decompilers or makes reverse engineered code much more difficult to analyze.
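A minimal sketch of dummy code insertion (the checksum function and decoy values are hypothetical): the inserted statements execute but never influence the result, padding the decompiled output with noise an analyst must rule out.

```python
import math

def checksum_plain(data):
    return sum(data) % 256

def checksum_with_dummy_code(data):
    # Decoy computation: evaluated, then never used.
    _decoy = [math.sin(i) for i in range(4)]
    total = 0
    for b in data:
        total += b
        _unused = (total ^ 0x5A) & 0xFF  # result discarded every iteration
    if not data and _decoy:
        pass  # dead branch that only references decoy state
    return total % 256
```

Both functions return identical checksums; only the second buries the real logic among irrelevant operations.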
Unused code and metadata removal. Removing debugging information, non-essential metadata, and unused code from applications makes them smaller and reduces the information available to an attacker. This procedure may slightly improve runtime performance.
Binary linking/merging. This technique combines multiple input executables/libraries into one or more output binaries. Linking can be used to make an application smaller, especially when used with renaming and pruning. It can simplify deployment scenarios, and it may reduce information available to hackers.
Opaque predicate insertion. This works by adding conditional branches that always evaluate to known results — results that cannot easily be determined via static analysis. This is a way of introducing potentially incorrect code that will never actually be executed but is confusing to attackers who are trying to understand the decompiled output.
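A classic opaque predicate exploits a number-theoretic fact: the product of two consecutive integers is always even. A hedged Python sketch (the pricing logic and 1.07 multiplier are hypothetical):

```python
def process(order_total):
    # A value a static analyzer cannot predict at compile time.
    x = id(object())
    # Opaque predicate: x * (x + 1) is the product of two consecutive
    # integers, so it is always even. The branch below always runs, but
    # proving that statically requires reasoning a decompiler won't do.
    if (x * (x + 1)) % 2 == 0:
        return order_total * 1.07  # the real logic
    else:
        # Dead branch: bogus logic that never executes, inserted purely
        # to mislead anyone reading the decompiled output.
        return order_total - 999
```

To a reader of the decompiled code, both branches look equally plausible, even though only one is ever taken.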
Anti-tamper. An obfuscator can inject application self-protection into the source code to verify that the application hasn't been tampered with. If tampering is detected, the application can shut itself down, limit its functionality, or take any other custom action.
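One way to sketch a self-check in Python is to hash a protected function's bytecode and compare it against a digest recorded at build time. This is an illustrative sketch only (the function names are hypothetical, and real anti-tamper products work on the compiled binary, not on Python code objects):

```python
import hashlib

def critical_logic(x):
    # The code path an attacker might try to patch.
    return x * 2

# "Build-time" digest of the protected function's bytecode. In a real
# product this constant would be computed at packaging time and baked in.
EXPECTED_DIGEST = hashlib.sha256(critical_logic.__code__.co_code).hexdigest()

def run_with_tamper_check(x):
    # Recompute the digest at runtime and compare to the baked-in value.
    digest = hashlib.sha256(critical_logic.__code__.co_code).hexdigest()
    if digest != EXPECTED_DIGEST:
        raise SystemExit("tampering detected; shutting down")
    return critical_logic(x)
```

If the protected bytecode is modified, the digests diverge and the app shuts itself down, which mirrors the custom reactions described above.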
Anti-debug. When a hacker tries to infiltrate or counterfeit an app, steal its data, or alter its behavior, they'll usually begin by reverse engineering it with a debugger. An obfuscator can layer in application self-protection by injecting code that detects whether the production application is executing within a debugger. If a debugger is detected, the app can corrupt sensitive data, invoke random crashes, or send a message to a service as a warning signal.
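A hedged Python sketch of the detection half: CPython debuggers such as pdb install a trace function, which `sys.gettrace()` exposes. The payment function and its reaction are hypothetical, and real anti-debug checks are platform-specific (e.g. ptrace detection on Android, sysctl checks on iOS):

```python
import sys

def debugger_attached():
    # A non-None trace function suggests a tracing debugger (or coverage
    # tool) is active in this CPython process.
    return sys.gettrace() is not None

def handle_payment(amount):
    if debugger_attached():
        # Possible reactions: corrupt sensitive data, crash randomly,
        # or report to a monitoring service. Here we just exit.
        raise SystemExit("debugger detected")
    return round(amount * 1.02, 2)
```

Note that this check can also fire under legitimate tooling (profilers, coverage runners), which is one reason such reactions are usually reserved for production builds.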
These are some of the more prominent security threats to apps, especially those personalised around user accounts. App developers should keep these potential threats in mind when updating and testing their apps. For those looking to build a mobile app, or looking for a mobile app developer in Singapore, these security points are also crucial to look out for!