Tuesday, November 25, 2014

Linux Bridge to act like hub

If you need to make a Linux bridge very stupid and act like a hub:

brctl setageing <bridgename> 0 




This command tells Linux to forget every MAC address that it sees on
the bridge, making it act as a hub.



Let's say you have vmbr0 with eth0 and tap0 in it, and a VM running with tap0 attached (or bridged to vmbr0 directly). If you link eth0 to your switch's mirror port, you won't RX a great deal of the traffic, because a Linux bridge normally acts as a switch and does not recognize the VM as an endpoint for the mirrored traffic.

The command above gives you a way to have a VM that listens on a mirrored port (so a NIDS could run on the VM).
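If the bridge is defined in /etc/network/interfaces (Debian/Proxmox style), the same thing can be made persistent; this is a sketch with made-up addresses, relying on the bridge-utils ifupdown options:

```
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_ageing 0    # ageing 0 = forget MACs immediately = hub behaviour
```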

You are welcome!

Wednesday, September 3, 2014

What do we do when we forget to start a program inside screen?

   reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

Source
Info



neercs is a work-in-progress libcaca project.
Like GNU screen, it allows you to detach a session from a terminal, but provides unique features:
  • Grabbing a process that you forgot to start inside neercs
  • Great screensaver
  • 3D rotating cube to switch between full screen terms
  • Real time thumbnails of your shells
  • Special effects when closing a window
  • Various window layouts...
neercs was written by Sam Hocevar, Jean-Yves Lamoureux and Pascal Terjan. It is free software, and can be used, modified and distributed under the terms of the Do What The Fuck You Want To Public License.



Thursday, May 29, 2014

[REPOST] Using Facebook Notes to DDoS any website

Facebook Notes allows users to include <img> tags. Whenever an <img> tag is used, Facebook crawls the image from the external server and caches it. Facebook will only cache the image once; however, using random GET parameters the cache can be bypassed, and the feature can be abused to cause a huge HTTP GET flood.

Steps to re-create the bug as reported to Facebook Bug Bounty on March 03, 2014.

Step 1. Create a list of unique img tags, as each tag is crawled only once:

<img src=http://targetname/file?r=1></img>
<img src=http://targetname/file?r=2></img>
        ..
<img src=http://targetname/file?r=1000></img>
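Generating the tag list from Step 1 is a one-liner (targetname and the r parameter are the placeholders from above):

```shell
# Emit 1000 img tags, each with a unique cache-busting query parameter
for i in $(seq 1 1000); do
    printf '<img src=http://targetname/file?r=%d></img>\n' "$i"
done > tags.txt
wc -l < tags.txt    # prints 1000
```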
 
Step 2. Use m.facebook.com to create the notes. It silently truncates the notes to a fixed length.
 
Step 3. Create several notes from the same user or different users. Each note is now responsible for 1000+ HTTP requests.

Step 4. View all the notes at the same time. The target server is hit by a massive HTTP GET flood: thousands of GET requests reach a single server in a couple of seconds, from 100+ Facebook servers in parallel.

Initial Response: the bug was denied, as they assumed it would only cause 404 requests and was not capable of high impact.
After exchanging a few emails I was asked to prove that the impact would be high. I fired up a target VM in the cloud and, using only browsers on three laptops, I was able to sustain 400+ Mbps of outbound traffic for 2-3 hours.

Number of Facebook Servers: 127

Of course, the impact could be more than 400 Mbps: I was only using browsers for this test and was limited by the number of browser threads per domain that would fetch the images. I created a proof-of-concept script that could cause even greater impact and sent the script along with the graph to Facebook.

On April 11, I got a reply that said
Thank you for being patient and I apologize for the long delay here. This issue was discussed, bumped to another team, discussed some more, etc.
In the end, the conclusion is that there’s no real way to us fix this that would stop “attacks” against small consumer grade sites without also significantly degrading the overall functionality.
Unfortunately, so-called “won’t fix” items aren’t eligible under the bug bounty program, so there won’t be a reward for this issue. I want to acknowledge, however, both that I think your proposed attack is interesting/creative and that you clearly put a lot of work into researching and reporting the issue last month. That IS appreciated and we do hope that you’ll continue to submit any future security issues you find to the Facebook bug bounty program.
I’m not sure why they are not fixing this. Supporting dynamic links in image tags could be a problem, and I’m not a big fan of it. I think a manual upload would satisfy users' needs if they want a dynamically generated image in their notes.

I also see a couple of other problems with this type of abuse:
  • A traffic amplification scenario: when the image is replaced by a PDF or video of larger size, Facebook crawls a huge file while the user sends almost nothing.
  • Each note supports 1000+ links, and Facebook blocks a user after creating around 100 notes in a short span. Since there is no captcha for note creation, all of this can be automated, and an attacker could easily prepare hundreds of notes using multiple users until the time of attack, when all of them are viewed at once.
Although a sustained 400 Mbps could be dangerous, I wanted to test this one last time to see if it can indeed have a larger impact.
Getting rid of the browser and using the poc script I was able to get ~900 Mbps outbound traffic.

I was using an ordinary 13 MB PDF file which was fetched by Facebook 180,000+ times, number of Facebook servers involved was 112.

We can see the traffic graph is almost constant at 895 Mbps. This is probably the cap on my cloud VM, which uses a shared Gbps ethernet port. There seems to be no restriction on the Facebook servers' side, and with so many servers crawling at once we can only imagine how high this traffic could get.

After finding and reporting this issue, I found similar issues with Google which I blogged here. Combining Google and Facebook, it seems we can easily get multiple Gbps of GET Flood.

Facebook crawler shows itself as facebookexternalhit. Right now it seems there is no other choice than to block it in order to avoid this nuisance.
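One way to do that blocking, sketched as Apache 2.2-style config (mod_setenvif assumed; adapt to whatever serves your site):

```apache
# Tag requests from the Facebook crawler by User-Agent and deny them
SetEnvIfNoCase User-Agent "facebookexternalhit" fb_crawler
<Location />
    Order Allow,Deny
    Allow from all
    Deny from env=fb_crawler
</Location>
```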

[Update1]

https://developers.facebook.com/docs/ApplicationSecurity/ mentions a way to get the list of IP addresses that belong to the Facebook crawler.

whois -h whois.radb.net -- '-i origin AS32934' | grep ^route
 
Blocking the IP addresses could be more effective than blocking the useragent.
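A sketch of turning that whois output into firewall rules; the function name and the blanket DROP policy are my own invention, so review the generated list before applying it:

```shell
# Turn RADB route objects (lines like "route: 31.13.24.0/21") into iptables rules
routes_to_rules() {
    awk '/^route:/ { print "iptables -A INPUT -s " $2 " -j DROP" }'
}

# Demo on canned whois output; the real list would come from:
#   whois -h whois.radb.net -- '-i origin AS32934' | routes_to_rules
printf 'route: 31.13.24.0/21\nroute: 66.220.144.0/20\n' | routes_to_rules
```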

I’ve been getting a lot of response on the blog and would like to thank the DOSarrest team for acknowledging the finding with an appreciation token.

[Update 2]

POC scripts and access logs can now be accessed on Github. The script is very simple and a mere rough draft. Please use them for research and analysis purposes only.

The access logs are the exact logs I used for ~900 Mbps test. In the access logs you will find 300,000+ requests from Facebook. Previously, I only counted the facebookexternalhit/1.1, it seems that for each img tag, there are two hits i.e. one from externalhit version 1.0 and one from 1.1. I also tried Google during the test and you will find around 700 requests from Google.


Original article: http://chr13.com/2014/04/20/using-facebook-notes-to-ddos-any-website/#update
Source: http://dailyleet.com/using-facebook-notes-to-ddos-any-website/

Saturday, May 10, 2014

Bind9 with DLZ and mysql backend... wait for it.. in Docker :)

Get yourself a working docker.io installation.

Make a Docker file (it's called Dockerfile):

#builddns image
#VERSION 0.1

FROM ubuntu:14.04
MAINTAINER Peach Lover <some@email.com>


RUN apt-get -qq update

Build an image out of it:

#docker build -t peach/builddns .

Start the docker container:
#docker run -i -t -p 53:53/udp peach/builddns /bin/bash

Attach there and build some code:






apt-get update
apt-get upgrade

apt-get install bind9 bind9utils build-essential debhelper hardening-wrapper libcap2-dev libdb-dev libkrb5-dev libldap2-dev libmysqlclient-dev libpq-dev libssl-dev libtool libxml2-dev mysql-client mysql-server openssl unixodbc unixodbc-dev
apt-get remove bind9
apt-get build-dep bind9

mkdir /root/bind9
cd /root/bind9
apt-get source bind9
cd bind9-9.9.5.dfsg

vi debian/rules
add the following
--with-dlz-mysql=yes

dpkg-buildpackage -rfakeroot -b

dpkg -i *.deb

vi /etc/default/bind9
OPTIONS="-u bind -n 1"

vi /etc/bind/named.conf.options
forwarders {
8.8.8.8;
8.8.4.4;
};

vi /etc/bind/named.conf.local
dlz "Mysql zone" {
database "mysql
{host=127.0.0.1 dbname=db_name user=db_user pass=db_pass}
{select zone from dns_records where zone = '$zone$'}
{select ttl, type, mx_priority, case when lower(type)='txt' then concat('\"', data, '\"') when lower(type) = 'soa' then concat_ws(' ', data, resp_person, serial, refresh, retry, expire, minimum) else data end from dns_records where zone = '$zone$' and host = '$record$'}";
};
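To see what the query tokens do: when bind looks up www.testie.local, DLZ substitutes $zone$ and $record$ in the lookup query, so (simplified to the plain-data case, where the CASE expression collapses to `data`) it effectively runs:

```sql
-- illustrative only: what the DLZ lookup boils down to for an A record
select ttl, type, mx_priority, data
from dns_records
where zone = 'testie.local' and host = 'www';
```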



mysql -p
create database db_name;
grant all privileges on db_name.* to db_user@localhost identified by 'db_pass';
CREATE TABLE `dns_records` (
  `id` int(11) NOT NULL auto_increment,
  `zone` varchar(64) default NULL,
  `host` varchar(64) default NULL,
  `type` varchar(8) default NULL,
  `data` varchar(64) default NULL,
  `ttl` int(11) NOT NULL default '3600',
  `mx_priority` int(11) default NULL,
  `refresh` int(11) NOT NULL default '3600',
  `retry` int(11) NOT NULL default '3600',
  `expire` int(11) NOT NULL default '86400',
  `minimum` int(11) NOT NULL default '3600',
  `serial` bigint(20) NOT NULL default '2008082700',
  `resp_person` varchar(64) NOT NULL default 'resp.person.email',
  `primary_ns` varchar(64) NOT NULL default 'ns1.yourdns.here',
  `data_count` int(11) NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `host` (`host`),
  KEY `zone` (`zone`),
  KEY `type` (`type`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

// for www.testie.local to resolve to 1.2.3.4
insert into dns_records (zone, host, type, data, mx_priority) values ('testie.local', 'www', 'A', '1.2.3.4', null);

// for testie.local to resolve to 1.2.3.4
insert into dns_records (zone, host, type, data, mx_priority) values ('testie.local', '@', 'A', '1.2.3.4', null);

// for www2.testie.local to alias to www.testie.local
// note the trailing period in the data field
insert into dns_records (zone, host, type, data, mx_priority) values ('testie.local', 'www2', 'CNAME', 'www.testie.local.', null);

// for mail for testie.local to go to testie.local
// note the trailing period in the data field
insert into dns_records (zone, host, type, data, mx_priority) values ('testie.local', '@', 'MX', 'testie.local.', '0');

# extra precaution to make sure the packages don't get updated
for package in bind9 bind9-doc bind9-host bind9utils dnsutils ; do \
echo $package hold | dpkg --set-selections ; done




Test from your host (localhost on both the container and the hypervisor will work too, since we forwarded port 53):

# dig @localhost testie.local

; <<>> DiG 9.9.5-3-Ubuntu <<>> @localhost testie.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15312
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;testie.local.            IN    A

;; ANSWER SECTION:
testie.local.        3600    IN    A    1.2.3.4

;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat May 10 14:23:51 EEST 2014
;; MSG SIZE  rcvd: 57



BANG!


Two ways from now:
1. Clean up this container as much as possible.
Stop it. Commit and use it like that.


2. Get your debs. Get your configs. Place them in the same folder as your Dockerfile and edit the Dockerfile (for config files you don't need RUN, just ADD them to the correct places).
ADD somefile.deb /somewhere/somefile.deb
RUN dpkg -i /somewhere/somefile.deb

This will first copy the file, then install it in the instance.

Also add
CMD ["/usr/sbin/named","-4","-u","bind","-n","1","-c","/etc/bind/named.conf","-f"]
at the end. This will be the command run when you start the container.
A nice little "-g" at the end will let you see all the logs that bind spits out when you attach... but beware: if you attach and then ^C, you will stop your container instance.
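Putting option 2 together, the Dockerfile might end up looking like this sketch (somefile.deb stands for each package dpkg-buildpackage produced for you, and the two config files are the ones edited earlier):

```dockerfile
#builddns image
#VERSION 0.2

FROM ubuntu:14.04
MAINTAINER Peach Lover <some@email.com>

RUN apt-get -qq update

# your locally built packages: copy in, then install
ADD somefile.deb /somewhere/somefile.deb
RUN dpkg -i /somewhere/somefile.deb

# config files need no RUN, just ADD them into place
ADD named.conf.options /etc/bind/named.conf.options
ADD named.conf.local /etc/bind/named.conf.local

# run bind in the foreground as the container's main process
CMD ["/usr/sbin/named","-4","-u","bind","-n","1","-c","/etc/bind/named.conf","-f"]
```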

build the container
run it
congrats, you have a brand new bind9 in a container with dlz-mysql.

Notice: since I am not planning to run MySQL in the same container as bind, I am not going in depth on setting up MySQL and the records in this last Docker setup.

Make this yourself you lazy nerds!


Wednesday, May 7, 2014

sysdig (a shout-out to my most loyal reader - probably the only one)

http://www.sysdig.org/
Sysdig is open source, system-level exploration: capture system state and activity from a running Linux instance, then save, filter and analyze.
Think of it as strace + tcpdump + lsof + awesome sauce.
With a little Lua cherry on top.

http://bencane.com/2014/04/18/using-sysdig-to-troubleshoot-like-a-boss/  - little guide

http://draios.com/fishing-for-hackers/ - What can be done with it

It puts swatch, aide, and everything else you can squeeze out of a Linux system through syslog in its back pocket.

Thursday, April 24, 2014

test available ssl ciphers

testciphers.sh
#!/usr/bin/env bash

# OpenSSL requires the port number.
SERVER=$1 #host:port
DELAY=1
ciphers=$(openssl ciphers 'ALL:eNULL' | sed -e 's/:/ /g')

echo Obtaining cipher list from $(openssl version).

for cipher in $ciphers
do
    echo -n "Testing $cipher..."
    result=$(echo -n | openssl s_client -cipher "$cipher" -connect "$SERVER" 2>&1)
    if [[ "$result" =~ "Cipher is ${cipher}" ]] ; then
        echo YES
    else
        if [[ "$result" =~ ":error:" ]] ; then
            error=$(echo -n "$result" | cut -d':' -f6)
            echo "NO ($error)"
        else
            echo UNKNOWN RESPONSE
            echo "$result"
        fi
    fi
    sleep "$DELAY"
done

python SNMP v3 trap catcher

 Needs lots of work to be called finished, but it works this way too.


snmptrapcatcher.py

from pysnmp.entity import engine, config
from pysnmp.carrier.asynsock.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv
from pysnmp.proto.api import v2c
import time
from datetime import datetime

datecheck=str(datetime.now().strftime("%d%m%y"))
logfile=str(datetime.now().strftime("SNMP-%d-%m-%y.log"))
tolog=open(logfile, 'a')
# Create SNMP engine with autogenerated engineID and pre-bound
# to socket transport dispatcher
snmpEngine = engine.SnmpEngine()

# Transport setup

# UDP over IPv4
config.addSocketTransport(
    snmpEngine,
    udp.domainName,
    udp.UdpTransport().openServerMode(('0.0.0.0', 161))
)

# SNMPv3/USM setup

# user: usr-md5-none, auth: MD5, priv NONE
config.addV3User(
    snmpEngine, 'snmpuser',
    config.usmHMACMD5AuthProtocol, 'snmppassword!'
)

# Callback function for receiving notifications
def cbFun(snmpEngine,
          stateReference,
          contextEngineId, contextName,
          varBinds,
          cbCtx):
    global datecheck
    global logfile
    global tolog
    if str(datetime.now().strftime("%d%m%y")) != datecheck:
        # a new day: roll over to a fresh log file
        datecheck = str(datetime.now().strftime("%d%m%y"))
        logfile = str(datetime.now().strftime("SNMP-%d-%m-%y.log"))
        tolog = open(logfile, 'a')
    tolog.write(datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S'))
    tolog.write('\n')
    #tolog.write('Notification received, ContextEngineId "%s", ContextName "%s"' % (
    #    contextEngineId.prettyPrint(), contextName.prettyPrint()
    #    )
    #)
    for name, val in varBinds:
        tolog.write('%s = %s\n' % (name.prettyPrint(), val.prettyPrint()))
    tolog.flush()


# Register SNMP Application at the SNMP engine
ntfrcv.NotificationReceiver(snmpEngine, cbFun)

snmpEngine.transportDispatcher.jobStarted(1) # this job would never finish

# Run I/O dispatcher which would receive queries and send confirmations
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise

indexing vms on separate vmware ESXi servers

When there is no Enterprise license for ESXi, there is usually more than one separate server, and if no proper documentation is kept you can easily forget where a VM is (on which hypervisor).

Here is a handy little python script that will show all VMs on a list of hosts.
It uses pysphere for ESX communication and yaml for the config file. There are also some commented-out parts that give more info if you uncomment them.

getvms.py
#!/usr/bin/python

import yaml
from pysphere import *

def listvms(server,hv):
    vmlist = server.get_registered_vms()
    for i in vmlist:
        vm1 = server.get_vm_by_path(i)
        print "%s - %s" % (hv, vm1.get_properties()['name'])



f = open('config.yaml')
config = yaml.load(f)
f.close()
hvlist = ['host1', 'host2', 'host3']
for hvserver in hvlist:
    server = VIServer()
    server.connect(hvserver, config["user"], config["pass"])
    #print "Connection established to host %s" % hvserver,
    #print server.get_server_type(), server.get_api_version()




    #vm1 = server.get_vm_by_path("[datastore1] cvs/cvs.vmx")
    """
    for i in vm1.get_properties():
        print i,
        print vm1.get_property(i)
    """

    listvms(server,hvserver)

    server.disconnect()


config.yaml
user: root
pass: passwordforuser

Wednesday, April 23, 2014

Apache + LDAP + SSL + Proxy frontend for proprietary web application that offers no authentication.

 Let's say we have a web application that we can't/don't want to tamper with, and it offers no authentication. It runs on port 8000. Offers no SSL.

Block port 8000 on all interfaces except for 127.0.0.1:
iptables -A INPUT -p tcp ! -d 127.0.0.1 --dport 8000 -j DROP

Use the following configuration for apache.

Listen 80
Listen 443
LDAPVerifyServerCert off
LDAPTrustedMode SSL
LDAPTrustedGlobalCert CERT_BASE64 /etc/httpd/cert1.pem
LDAPTrustedGlobalCert KEY_BASE64 /etc/httpd/key1.pem
NameVirtualHost *:443
<VirtualHost *:443>
         ProxyRequests Off
         ProxyPreserveHost On
         ProxyPass / http://127.0.0.1:8000/
         ProxyPassReverse / http://127.0.0.1:8000/
         SSLEngine on
         SSLCertificateFile /etc/httpd/webssl.cer
         SSLCertificateKeyFile /etc/httpd/webssl.key
        <Location />
                Order deny,allow
                Allow from all
                AuthLDAPBindDN "CN=LDAP Query,CN=Users,DC=dc1,DC=example,DC=net"
                AuthLDAPBindPassword "LDAP PASSWORD FOR BIND USER"
                # search user
                AuthLDAPURL "ldap://dc1.example.net:636/CN=Users,DC=dc1,DC=example,DC=net?sAMAccountName?sub?(objectClass=*)" SSL
                AuthType Basic
                AuthName "Password Required"
                Require valid-user
                AuthBasicProvider ldap
        </Location>
</VirtualHost>
# Separate virtual host on port 80 to rewrite http to https, because the application returns URLs with http
<VirtualHost *:80>
         RewriteEngine On
         RewriteCond %{HTTPS} off
         RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>

Windows Events to Syslog

 To have all logs transferred in the same way on a Linux-dominated network, here is a great tool:
https://code.google.com/p/eventlog-to-syslog/

Also accepts filtering using XPath expressions.

RTFM: https://eventlog-to-syslog.googlecode.com/files/Readme_4.5.0.pdf

Regex web testers and tutorials/books

Inspired by RegexBuddy. It works pretty well, and being web-based it can be used when you are away from your usual workplace and preferred software.

Here is a handy guide: http://www.regular-expressions.info/quickstart.html

Still a work in progress, but since I like the author, here is a "Learn the hard way" book on regex: http://regex.learncodethehardway.org/book/
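For a zero-install taste of the quickstart material, plain grep already speaks this dialect:

```shell
# A character class [ao] matches exactly one of the listed characters
printf 'cat\ncot\ncut\n' | grep -E 'c[ao]t'   # matches cat and cot, not cut
```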

openssl Commands

Source:  https://www.sslshopper.com/article-most-common-openssl-commands.html

 Again copy/pasta to have it near when needed:

General OpenSSL Commands

These commands allow you to generate CSRs, Certificates, Private Keys and do other miscellaneous tasks.
  • Generate a new private key and Certificate Signing Request
    openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
  • Generate a self-signed certificate (see How to Create and Install an Apache Self Signed Certificate for more info)
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
  • Generate a certificate signing request (CSR) for an existing private key
    openssl req -out CSR.csr -key privateKey.key -new
  • Generate a certificate signing request based on an existing certificate
    openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key
  • Remove a passphrase from a private key
    openssl rsa -in privateKey.pem -out newPrivateKey.pem

Checking Using OpenSSL

If you need to check the information within a Certificate, CSR or Private Key, use these commands. You can also check CSRs and check certificates using our online tools.
  • Check a Certificate Signing Request (CSR)
    openssl req -text -noout -verify -in CSR.csr
  • Check a private key
    openssl rsa -in privateKey.key -check
  • Check a certificate
    openssl x509 -in certificate.crt -text -noout
  • Check a PKCS#12 file (.pfx or .p12)
    openssl pkcs12 -info -in keyStore.p12

Debugging Using OpenSSL

If you are receiving an error that the private key doesn't match the certificate, or that a certificate you installed on a site is not trusted, try one of these commands. If you are trying to verify that an SSL certificate is installed correctly, be sure to check out the SSL Checker.
  • Check an MD5 hash of the public key to ensure that it matches with what is in a CSR or private key
    openssl x509 -noout -modulus -in certificate.crt | openssl md5
    openssl rsa -noout -modulus -in privateKey.key | openssl md5
    openssl req -noout -modulus -in CSR.csr | openssl md5
  • Check an SSL connection. All the certificates (including Intermediates) should be displayed
    openssl s_client -connect www.paypal.com:443
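The modulus-match check above can be exercised end to end with a throwaway key pair; the temp dir, CN, and file names below are made up for the demo:

```shell
# Throwaway key + matching self-signed cert in a temp dir
tmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=example" \
    -keyout "$tmp/privateKey.key" -out "$tmp/certificate.crt" 2>/dev/null
# The modulus hashes must agree if key and cert belong together
k=$(openssl rsa  -noout -modulus -in "$tmp/privateKey.key"  | openssl md5)
c=$(openssl x509 -noout -modulus -in "$tmp/certificate.crt" | openssl md5)
[ "$k" = "$c" ] && echo "key and certificate match"
```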

Converting Using OpenSSL

These commands allow you to convert certificates and keys to different formats to make them compatible with specific types of servers or software. For example, you can convert a normal PEM file that would work with Apache to a PFX (PKCS#12) file and use it with Tomcat or IIS. Use our SSL Converter to convert certificates without messing with OpenSSL.
  • Convert a DER file (.crt .cer .der) to PEM
    openssl x509 -inform der -in certificate.cer -out certificate.pem
  • Convert a PEM file to DER
    openssl x509 -outform der -in certificate.pem -out certificate.der
  • Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to PEM
    openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes
    You can add -nocerts to only output the private key or add -nokeys to only output the certificates.
  • Convert a PEM certificate file and a private key to PKCS#12 (.pfx .p12)
    openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt

Java Keystore commands

Source:  http://www.sslshopper.com/article-most-common-java-keytool-keystore-commands.html

 

I will copy/pasta the part that i use so I will have it handy here.

 

Java Keytool Commands for Creating and Importing

These commands allow you to generate a new Java Keytool keystore file, create a CSR, and import certificates. Any root or intermediate certificates will need to be imported before importing the primary certificate for your domain.
  • Generate a Java keystore and key pair keytool -genkey -alias mydomain -keyalg RSA -keystore keystore.jks -keysize 2048
  • Generate a certificate signing request (CSR) for an existing Java keystore keytool -certreq -alias mydomain -keystore keystore.jks -file mydomain.csr
  • Import a root or intermediate CA certificate to an existing Java keystore keytool -import -trustcacerts -alias root -file Thawte.crt -keystore keystore.jks
  • Import a signed primary certificate to an existing Java keystore keytool -import -trustcacerts -alias mydomain -file mydomain.crt -keystore keystore.jks
  • Generate a keystore and self-signed certificate (see How to Create a Self Signed Certificate using Java Keytool for more info) keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048

Java Keytool Commands for Checking

If you need to check the information within a certificate, or Java keystore, use these commands.
  • Check a stand-alone certificate keytool -printcert -v -file mydomain.crt
  • Check which certificates are in a Java keystore keytool -list -v -keystore keystore.jks
  • Check a particular keystore entry using an alias keytool -list -v -keystore keystore.jks -alias mydomain

Other Java Keytool Commands

  • Delete a certificate from a Java Keytool keystore keytool -delete -alias mydomain -keystore keystore.jks
  • Change a Java keystore password keytool -storepasswd -new new_storepass -keystore keystore.jks
  • Export a certificate from a keystore keytool -export -alias mydomain -file mydomain.crt -keystore keystore.jks
  • List Trusted CA Certs keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts
  • Import New CA into Trusted Certs keytool -import -trustcacerts -file /path/to/ca/ca.pem -alias CA_ALIAS -keystore $JAVA_HOME/jre/lib/security/cacerts

Tuesday, April 22, 2014

openssl verify

Make a folder to contain your public certificates:

#mkdir certs
#cd certs

Get public cert for the server you want to check:
#openssl s_client -showcerts -connect server:port

Copy from the "-----BEGIN CERTIFICATE-----" to the "-----END CERTIFICATE-----" , and save it in a file ending in .pem

Get the issuer (CA) root certificate ("Certification Authority Root Certificate").
It should be provided by your issuer, or if you are your own CA, you should know how to get it. Place it in the same directory as the certificate of your server (the one you are testing).

 Rehash the certificates. This basically creates link files pointing to your .pem files. The names are based on the certificate content, so the openssl command will be able to operate on the files.

#for file in *.pem; do ln -s "$file" "$(openssl x509 -hash -noout -in "$file").0"; done
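The rehash trick can be sanity-checked with a throwaway self-signed certificate; the CN and file names here are made up:

```shell
# Scratch dir with one self-signed cert (stands in for your CA/server certs)
tmp=$(mktemp -d); cd "$tmp"
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=test" \
    -keyout key.tmp -out cert.pem 2>/dev/null
# Same loop as above: creates a <subject-hash>.0 link next to the .pem
for file in *.pem; do ln -s "$file" "$(openssl x509 -hash -noout -in "$file").0"; done
# openssl s_client -CApath "$tmp" ... would now find the cert by its hash
ls *.0
```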

Verify the certificate:

#openssl s_client -CApath . -connect server:port

Output should be similar to:
..
..
..
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : DES-CBC3-SHA
    Session-ID: 53563D55F85CD643713643B7163A8C25113B114703C975DEA1C57D659FFBF96E
    Session-ID-ctx:
    Master-Key: 7288C083E0723BC61C4C21DC91908E34BD5C65695064E4E114FF4ED763ECA1D489794B9911E69021B8A8083A9CAB18EE
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1398160725
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
..
..

If you see Verify return code: 0 (ok) you are good!

Monday, March 10, 2014

AVD - Android Virtual Device - Scumbag Google

Some time ago, when I first tested the capabilities of the emulator included in the Android SDK, Google Store (now Google Play) was present by default in every image offered for download by the AVD. Now? Now the opinion is that the right to use the store (especially for the free apps) comes with the purchase of an Android device. The images do not include "the white bag". Opening play.google.com and trying to install an app from there leads to an unpleasant message that we have not used the app (the white bag again) and therefore have no devices added to the account, even though we are logged into the site with that very account.

All of this leads me to think that when a manufacturer creates a device, it pays Google for the right to use Android. Together with the fact that Android is to a large extent Linux, which is not officially mentioned anywhere, I am starting to wonder whether the money "earned with the sweat of our brows", the money we pay to own and use a smart device, is the fuel for building an evil empire.
Worth mentioning too is the ever-present spam (ads) in a huge share of the apps.

Personally, I am getting fed up with this policy of theirs!

And now to the slightly more technical part of this topic.

How to add the white bag to the emulator image
Android 2.3 (because that is what I needed)

On an available Android device (your phone, for example) install an old version of Astro File Explorer (the new ones with "Cloud" in the name won't do the job). From the menu choose Tools -> Backup Application and select Google Store. Choose the SD card as the destination. This creates an apk file, vending.something.apk. Take that file.

From /system/app/ take GoogleLoginServices.apk

Use a tool for editing a Yaffs2 filesystem.
- A few options: yaffs-tools, kernel support, or the easy way: download Yaffey and install it on some Windows machine. According to quite a few posts, Yaffey, whose source is available and which uses Qt, can also be compiled under Linux, but for me it was easier to use Windows to save some time on this.



Open system.img with Yaffey for the system you are emulating and import the two apk files under /system/app. Save the image, and on the next boot you have the "white bag". Log in and happily install applications from the store.


General hints:

- Under Linux, AVD has no support for the Hardware Execution Manager (HAXM).
Download an Intel Atom image in AVD and install libvirt with KVM. Add the user you work with to the appropriate libvirt groups, and with no further configuration the emulator runs with hardware acceleration. It works super fast and smooth.

- You want to record a video of the Android 2.3 screen in the emulator. There is no application that works from inside Android itself.

Install Ashot http://sourceforge.net/projects/ashot/

With it you can capture a sequence of screenshots and join them into a video. The solution is ugly, but for now I have not found another (under 2.3).

The same thing for Android 4.4 is done with an application found on the image itself, but more on that later, when I get to try it.