Consider the following two strings:
1. for (int i = 0; i < b.size(); i++) {

2. do something in English (not necessary to be a sentence).

The first is Java code and the second is English. How can we detect that the first is code and the second is English?
The Java code may not parse, because it is not a complete method/statement/expression. A solution to this problem is given below. Since there is sometimes no clear boundary between code and English, 100% accuracy is impossible; however, with the solution below you can easily tune the program to fit your needs.
The basic idea is to convert the string into a list of tokens. For example, the code line above might become "KEY, SEPARATOR, ID, ASSIGN, NUMBER, SEPARATOR, ...". Then we can use simple rules to separate code from English.
The Tokenizer class converts a string into a list of tokens.
package lexical;

import java.util.LinkedList;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Tokenizer {

    private class TokenInfo {
        public final Pattern regex;
        public final int token;

        public TokenInfo(Pattern regex, int token) {
            super();
            this.regex = regex;
            this.token = token;
        }
    }

    public class Token {
        public final int token;
        public final String sequence;

        public Token(int token, String sequence) {
            super();
            this.token = token;
            this.sequence = sequence;
        }
    }

    private LinkedList<TokenInfo> tokenInfos;
    private LinkedList<Token> tokens;

    public Tokenizer() {
        tokenInfos = new LinkedList<TokenInfo>();
        tokens = new LinkedList<Token>();
    }

    public void add(String regex, int token) {
        // Anchor each pattern at the start of the remaining input.
        tokenInfos.add(new TokenInfo(Pattern.compile("^(" + regex + ")"), token));
    }

    public void tokenize(String str) {
        String s = str.trim();
        tokens.clear();
        while (!s.equals("")) {
            boolean match = false;
            for (TokenInfo info : tokenInfos) {
                Matcher m = info.regex.matcher(s);
                if (m.find()) {
                    match = true;
                    String tok = m.group().trim();
                    s = m.replaceFirst("").trim();
                    tokens.add(new Token(info.token, tok));
                    break;
                }
            }
            if (!match) {
                tokens.clear();
                System.out.println("Unexpected character in input: " + s);
                return;
            }
        }
    }

    public LinkedList<Token> getTokens() {
        return tokens;
    }

    public String getTokensString() {
        StringBuilder sb = new StringBuilder();
        for (Tokenizer.Token tok : tokens) {
            sb.append(tok.token);
        }
        return sb.toString();
    }
}
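Before wiring the tokenizer into a classifier, it helps to see the ordered-pattern idea in isolation. The following compact, self-contained sketch (the MiniTokenizer name and the trimmed-down pattern set are mine, for illustration only) shows how earlier patterns win and how a line collapses into a digit string:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MiniTokenizer {
    // Ordered (pattern, id) pairs: earlier entries win, as in Tokenizer above.
    private final List<Pattern> patterns = new ArrayList<>();
    private final List<Integer> ids = new ArrayList<>();

    void add(String regex, int id) {
        patterns.add(Pattern.compile("^(" + regex + ")")); // anchor at start of remaining input
        ids.add(id);
    }

    // Returns the token string, or "" if some character matches no pattern.
    String tokenize(String input) {
        StringBuilder out = new StringBuilder();
        String s = input.trim();
        while (!s.isEmpty()) {
            boolean matched = false;
            for (int i = 0; i < patterns.size(); i++) {
                Matcher m = patterns.get(i).matcher(s);
                if (m.find()) {
                    out.append(ids.get(i));
                    s = m.replaceFirst("").trim();
                    matched = true;
                    break;
                }
            }
            if (!matched) {
                return ""; // unexpected character
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        MiniTokenizer t = new MiniTokenizer();
        t.add("for|int|do", 1);                   // a few keywords, for brevity
        t.add("\\(|\\)|\\{|;|<|\\+\\+|=|\\.", 2); // separators and operators
        t.add("[0-9]+", 3);                       // numbers
        t.add("[a-zA-Z][a-zA-Z0-9_]*", 4);        // identifiers / English words
        System.out.println(t.tokenize("do something in English")); // prints 1444
        System.out.println(t.tokenize("for (int i = 0;"));         // prints 1212432
    }
}
```

The digit strings already hint at the rule used later: English lines dissolve into runs of word tokens (4), while code mixes keywords, numbers, and punctuation throughout.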
From this we can recognize Java's keywords, separators, operators, identifiers, and so on. If we assign each token type a mapped value, an input string is converted into a token string.
package lexical;

public class EnglishOrCode {

    private static Tokenizer tokenizer = null;

    public static void initializeTokenizer() {
        tokenizer = new Tokenizer();
        // Java keywords (plus "todo", which often appears inside code).
        String keyString = "abstract assert boolean break byte case catch "
                + "char class const continue default do double else enum"
                + " extends false final finally float for goto if implements "
                + "import instanceof int interface long native new null "
                + "package private protected public return short static "
                + "strictfp super switch synchronized this throw throws true "
                + "transient try void volatile while todo";
        String keyStr = String.join("|", keyString.split(" "));
        // \b keeps a keyword from matching the prefix of a longer word
        // (e.g. "format" should not tokenize as the keyword "for").
        tokenizer.add("(" + keyStr + ")\\b", 1);
        // Separators and operators; multi-character operators come first
        // so that "==" is not split into two "=" tokens.
        tokenizer.add("==|<=|>=|!=|&&|\\|\\||\\+\\+|--|"
                + "\\(|\\)|\\{|\\}|\\[|\\]|;|,|\\.|=|>|<|!|~|"
                + "\\?|:|\\+|-|\\*|/|&|\\||\\^|%|'|\"|\n|\r|\\$|\\#", 2);
        tokenizer.add("[0-9]+", 3);                // number
        tokenizer.add("[a-zA-Z][a-zA-Z0-9_]*", 4); // identifier / English word
        tokenizer.add("@", 4);
    }

    public static void main(String[] args) {
        initializeTokenizer();
        String s = "do something in English";
        System.out.println(isEnglish(s) ? "English" : "Java Code");
        s = "for (int i = 0; i < b.size(); i++) {";
        System.out.println(isEnglish(s) ? "English" : "Java Code");
    }

    private static boolean isEnglish(String str) {
        tokenizer.tokenize(str);
        String patternString = tokenizer.getTokensString();
        // Three consecutive word tokens anywhere, or nothing but word
        // tokens, is treated as English.
        return patternString.matches(".*444.*") || patternString.matches("4+");
    }
}
Output:
English
Java Code
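The rule in isEnglish (three consecutive word tokens, or nothing but word tokens) is deliberately crude, and the point of the approach is that it is easy to tune. One possible adjustment, sketched below as my own suggestion rather than part of the original program, is to classify by the fraction of identifier/word tokens (id 4) in the token string; the 0.7 threshold is an illustrative assumption, not a tuned value:

```java
public class TokenRatioRule {
    // English when word tokens (id 4) make up at least 70% of the token
    // string. The 0.7 cut-off is an assumption for illustration.
    static boolean looksLikeEnglish(String tokenString) {
        if (tokenString.isEmpty()) {
            return false;
        }
        long words = tokenString.chars().filter(c -> c == '4').count();
        return (double) words / tokenString.length() >= 0.7;
    }

    public static void main(String[] args) {
        // "do something in English" tokenizes to "1444" (3/4 word tokens).
        System.out.println(looksLikeEnglish("1444"));                // prints true
        // The for-loop header tokenizes to "1214232424242224222" (5/19).
        System.out.println(looksLikeEnglish("1214232424242224222")); // prints false
    }
}
```

A ratio-based rule degrades more gracefully than a fixed regex on short inputs such as "double the count", which the exact-pattern rule misclassifies because "double" is tokenized as a keyword.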
This concludes the article on how to tell whether a string is English or Java code. Thanks for reading.